Creative and critical thinking, both of which require higher-order thinking skills, have become core competencies that members of competitive twenty-first-century societies are expected to possess. Accordingly, educational institutions are expected to raise individuals equipped with these competencies, and expectations of universities are considerably higher than those of institutions at other educational levels, most probably because their target population is generally composed of young adults, who are perceived as an investment in the future of their countries. Universities tend to organize their graduate and undergraduate programmes with this particular fact in mind. In order to see to what extent they have accomplished the goal of raising the individuals society needs, they inevitably employ assessment tools designed to measure the competencies students are expected to gain in theoretical and practical courses. Consequently, assessment types which promote critical and creative thinking skills have gained significance, especially in higher education.

Not surprisingly, written exams are extensively used as conventional tools in most countries to evaluate students' performance through questions which primarily measure the cognitive aspect of their knowledge. There is a broad consensus among scholars that a reliable exam paper should include questions of various difficulty levels designed to assess student knowledge fairly and carefully. However, it is known that teaching staff at institutions of higher education are more inclined to produce exams comprised of relatively less demanding and challenging questions which can easily be answered using lower-order thinking skills. In consequence, students tend to identify questions requiring practical and reflective knowledge as 'unfair, illegitimate or even meaningless' (Black, 1999: 129).
Similarly, Killen (2002) points out that teachers should structure learning tasks that require students to go beyond following simple routines and manipulating knowledge, and should test their knowledge accordingly. Existing literature indicates that test expectancies encourage study behaviour that matches the demands of the anticipated test (Thiede et al., 2011; Finley & Benjamin, 2012).
Jensen et al. (2014) reported that students who were administered exams including high-level questions gained a deep conceptual understanding of the instructional materials and tended to retain the knowledge acquired in the courses longer, confirming the proposed hierarchical nature of Bloom's taxonomy, a well-established framework offering six core levels of assessment based on the thinking patterns they require. More specifically, it is a system that classifies educational objectives according to the level of student understanding needed for achievement or mastery of certain skills. The taxonomy represents intellectual activity at six levels: (i) knowledge, (ii) comprehension, (iii) application, (iv) analysis, (v) synthesis, and (vi) evaluation. The first two levels require minimal understanding and lower-order thinking skills (Zoller, 1993; Crowe et al., 2008). Application is identified as an intermediate level (Crowe et al., 2008), while the remaining three levels are considered to require higher-order thinking skills (Zoller, 1993). Taking the above-mentioned literature into account, this study analyses assessment practices used in higher education with respect to Bloom's taxonomy. It specifically examines written assessment practices used at a state university in Turkey, and aims to classify the questions used in these practices according to the taxonomy. Accordingly, two broad research questions were addressed:
- Which level(s) of Bloom's taxonomy are most prevalent in written assessment practices in higher education in Turkey?
- Do the questions posed in written assessment practices in higher education in Turkey require higher- or lower-order thinking skills?
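The classification scheme described above can be sketched in code. The following is a minimal illustration, not the study's actual instrument: once each exam question has been coded with one of the six original Bloom levels, the questions can be tallied into lower-order, intermediate, and higher-order categories following the groupings attributed to Zoller (1993) and Crowe et al. (2008). The sample exam paper at the end is hypothetical.

```python
from collections import Counter

# Mapping of the six original Bloom levels to thinking-skill tiers,
# following Zoller (1993) and Crowe et al. (2008): the first two levels
# are lower-order, application is intermediate, the rest are higher-order.
TIER = {
    "knowledge": "lower-order",
    "comprehension": "lower-order",
    "application": "intermediate",
    "analysis": "higher-order",
    "synthesis": "higher-order",
    "evaluation": "higher-order",
}

def tally(coded_questions):
    """Count coded questions per Bloom level and per thinking-skill tier."""
    levels = Counter(coded_questions)
    tiers = Counter(TIER[level] for level in coded_questions)
    return levels, tiers

# Hypothetical coding of a ten-question exam paper.
sample = [
    "knowledge", "knowledge", "comprehension", "comprehension",
    "comprehension", "application", "application", "analysis",
    "knowledge", "evaluation",
]
levels, tiers = tally(sample)
print(tiers)  # tier counts for this hypothetical paper
```

In this hypothetical paper, lower-order questions dominate (six of ten), which is the pattern the literature reviewed above would lead one to expect in conventional written exams.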
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., et al. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. Boston: Allyn & Bacon.
Black, P. (1999). Assessment, learning theories and testing systems. In P. Murphy (Ed.), Learners, learning and assessment (pp. 118-134). London: Paul Chapman.
Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals.
Creswell, J. W. (2008). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. New Jersey: Pearson Education.
Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed methods research. USA: Sage Publications.
Crowe, A., Dirks, C., & Wenderoth, M. P. (2008). Biology in Bloom: Implementing Bloom's taxonomy to enhance student learning in biology. CBE - Life Sciences Education, 7, 368-381.
Finley, J. R., & Benjamin, A. S. (2012). Adaptive and qualitative changes in encoding strategy with experience: Evidence from the test-expectancy paradigm. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(3), 632-652.
Jensen, J. L., McDaniel, M. A., Woodard, S. M., & Kummer, T. A. (2014). Teaching to the test… or testing to teach: Exams requiring higher order thinking skills encourage greater conceptual understanding. Educational Psychology Review, 26(2), 307-329.
Killen, R. (2002). Productive pedagogy. Unpublished manuscript, University of Newcastle, Australia.
Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches. USA: Sage Publications.
Thiede, K. W., Wiley, J., & Griffin, T. D. (2011). Test expectancy affects metacomprehension accuracy. British Journal of Educational Psychology, 81, 264-273.
van den Berg, G. (2004). The use of assessment in the development of higher-order thinking skills. Africa Education Review, 1(2), 279-294. DOI: 10.1080/18146620408566285
Zoller, U. (1993). Are lecture and learning compatible? Maybe for LOCS: Unlikely for HOCS (SYM). Journal of Chemical Education, 70, 195-197.