03 SES 03 A, Role of Curriculum Policy Steering Documents
The legitimacy of an educational system depends on a number of fundamental assumptions. For example, the central authorities, those who educate, those who are educated, and the surrounding society are all assumed to trust that knowledge requirements are based on some form of rationality. A basic pillar of such rationality is the notion of a required progression within the educational system. This study focuses on progression expressed as knowledge criteria, from primary and secondary school to higher education. The criteria for primary and secondary school are constructed by the National Agency for Education; in higher education, university teachers formulate the criteria, albeit within a framework regulated by the Higher Education Ordinance. The main question is: How widespread and established is the presence of progression in the knowledge criteria of the Swedish educational system? The study is thus not concerned with students' individual knowledge development, but with the criteria they have to fulfil during their education. Studies that rank knowledge criteria within a national educational system are rare in both national and international contexts. While assessment research has grown in scope during recent decades, research on knowledge progression is still at an initial, though promising, stage, despite the fact that both national and transnational educational policies have long striven for more homogeneous and comparable measures of learning outcomes (see for example Meyer & Benavot 2013; Nordin & Sundberg 2014). Several studies relate to this issue: Näsström (2009) has compared teachers' interpretations of Bloom's revised taxonomy with those of assessment experts, and Sellbjer (2015) has studied how university teachers understand and/or misunderstand the examination tasks that they themselves have formulated.
Sosibo and Nomlomo (2014) have shown how a lack of clarity in the definitions of standards contributes to the reproduction of unequal educational outcomes. Stemler (2004) has, in a more overarching way, taken up the concept of interrater reliability, showing that it is often incorrectly treated as a uniform concept; different approaches to evaluating interrater reliability instead carry particular implications for how assessment between different groups can be understood. Much of the research on knowledge progression focuses on the practical management of the different taxonomies and matrices in school. Studies that focus on ranking learning objectives almost unanimously point to the American psychologist Benjamin Bloom (Bloom, 1956) or to Anderson, Krathwohl and Bloom's (2001) revised version of Bloom's original system, which they deemed too one-dimensional. Few studies, however, focus on the very conditions for this treatment, or on the concerned parties' understanding of educational and knowledge objectives. This absence of research applies to progression both within and among the different educational levels. Through a critical examination of knowledge progression in the Swedish educational system, the present study aims to contribute new knowledge to this growing research field. With regard to empirical studies where ranking has been used as a method, one sociological study examined how prominent sociologists ranked learning objectives for an introductory course in sociology (Persell, 2010). Other examples are Michael and McFarland (2011), who asked physiologists working in medical education to rank the "big ideas" of physiology, and Graham (2010), who ranked which administrative staff tasks are the most important for students.
To answer the question of how widespread and established the presence of progression is in the knowledge criteria of the Swedish educational system, both quantitative and qualitative methods were used. In the quantitative study, 33 professors in twelve subjects were asked to rank twelve criteria drawn from the subject in which they have expert knowledge. The professors were chosen because they, if anyone, can be assumed to be best placed to recognize the complexity of what is required to achieve each criterion. The knowledge criteria were taken from grades 6 and 9 (primary school) and upper secondary school, as well as from the descriptions of aims for Bachelor's, Master's, and doctoral studies. Leading words were replaced with more neutral ones, and the criteria were randomly ordered. The professors were instructed to rank the criteria from 1 to 12 according to how complex they understood them to be. In the next step, the correlation between the professors' rankings and the order of the knowledge requirements from grade 6, grade 9, and so forth was calculated. To obtain a more elaborate answer to the research question, an additional qualitative analysis was carried out using Bloom's Taxonomy of Educational Objectives (Bloom, 1956). The taxonomy is structured along six levels of expertise, beginning with knowledge as the lowest level, followed by comprehension, application, analysis, synthesis, and evaluation as the highest level. Each level is defined by a limited number of verbs describing the objectives that should be attained in order to qualify for that specific level. At the level of knowledge, the objectives concern aspects such as recognition of ideas and procedures, while evaluation includes making judgements about ideas and methods using a variety of resources.
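The correlation step described above can be sketched as follows. The abstract does not specify which correlation coefficient was used; Spearman's rank correlation is a natural choice when both variables are rankings, and all data below are hypothetical illustrations rather than the study's actual results.

```python
# Minimal sketch of the correlation step, with hypothetical data:
# each criterion has a "curricular" rank given by the level it was drawn
# from (1 = grade 6 ... 12 = doctoral studies) and a professor-assigned
# complexity rank from 1 to 12.

def spearman_rho(true_ranks, judged_ranks):
    """Spearman rank correlation for two untied rankings of the same items."""
    n = len(true_ranks)
    d_sq = sum((t - j) ** 2 for t, j in zip(true_ranks, judged_ranks))
    return 1 - (6 * d_sq) / (n * (n ** 2 - 1))

# Hypothetical professor whose complexity ranking only weakly follows
# the order of the educational levels.
level_order = list(range(1, 13))                 # criteria in curricular order
professor = [7, 2, 11, 4, 1, 9, 12, 5, 3, 10, 6, 8]

print(round(spearman_rho(level_order, professor), 2))  # → 0.17
```

A rho near 1 would indicate that the professor's complexity ordering tracks the curricular ordering; values near 0, as in this invented example, mirror the weak average correlation the study reports.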
The syllabuses were analysed for the extent to which they fulfil the requirements of each level of expertise, as expressed through its defining verbs, on a scale from one to three, where one stands for not present at all, two for present to some extent, and three for fully present. The analysis was carried out in three subjects. Comparing the results for each of the course syllabuses studied in relation to Bloom's taxonomy made it possible to visualize the knowledge progression between the different levels, as expressed through their educational objectives.
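The verb-based coding of syllabuses can be sketched as below. The verb inventories here are hypothetical stand-ins (the abstract does not list the defining verbs), and the actual one-to-three scoring was done by the researchers' qualitative judgement, not by string matching; the sketch only illustrates the logic of scoring each Bloom level by the presence of its defining verbs.

```python
# Hypothetical verb lists per Bloom level; the study's actual inventories
# are not given in the abstract.
BLOOM_VERBS = {
    "knowledge":     {"define", "recall", "list"},
    "comprehension": {"describe", "explain", "summarise"},
    "application":   {"apply", "use", "demonstrate"},
    "analysis":      {"analyse", "compare", "differentiate"},
    "synthesis":     {"design", "formulate", "construct"},
    "evaluation":    {"evaluate", "judge", "critique"},
}

def level_presence(objectives):
    """Score each Bloom level 1-3: 1 = absent, 2 = some verbs, 3 = all verbs."""
    words = set(" ".join(objectives).lower().split())
    scores = {}
    for level, verbs in BLOOM_VERBS.items():
        hits = len(verbs & words)
        scores[level] = 1 if hits == 0 else (3 if hits == len(verbs) else 2)
    return scores

# Invented syllabus excerpts for illustration.
syllabus = ["The student can describe and explain central concepts",
            "The student can apply methods and analyse results"]
print(level_presence(syllabus))
```

Tabulating such scores per syllabus and per educational level is what allows the progression (or its absence) between levels to be visualized.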
In the first study, in which professors were asked to rank twelve knowledge criteria, the correlation is, with the exception of the subject English, on average 0.11. In History the correlation is -0.35 and in Biology -0.49, that is, bordering on an inverse ordering. The results are remarkable, since they show no correlation between the complexity of the knowledge criteria and the level from which the criteria are drawn. The second study shows that students in grade 3, following the range described in Bloom's model, move from "factual knowledge" towards "application", while those who obtain the highest grade in grade 9 are also able to achieve "synthesis". In total, five of the six verbs are used in primary school. It also emerged that all the verbs in Bloom's taxonomy were used in each of the educational forms up to doctoral studies. Authors of knowledge criteria thus only achieve progression within the single level they can survey. This means that the student, in terms of verbs, so to speak restarts at each level with regard to the complexity of the knowledge that can be achieved. This gives one answer to the question of why the outcome of the first study was so remarkable. The results of the two studies manifest the difficulty of creating a uniform understanding of knowledge progression within an educational system, and they generate additional data on how the educational system's knowledge progression is understood. At the same time, they problematize this understanding in light of the logic of the result-oriented educational system, where the comparability of results within and among the different levels of the educational system is of central importance.
Anderson, L. W., Krathwohl, D. R., & Bloom, B. S. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. Allyn & Bacon.
Bloom, B. S. (1956). Taxonomy of educational objectives. Vol. 1: Cognitive domain. New York: McKay.
Graham, C. (2010). Hearing the voices of professional staff: Professional staff perspectives on their contributions to student outcomes. Journal of Higher Education Policy and Management, 32(3), 213-223.
Meyer, H.-D. & Benavot, A. (Eds.) (2013). PISA, power, and policy: The emergence of global educational governance. Oxford: Symposium Books.
Michael, J. & McFarland, J. (2011). The core principles ("big ideas") of physiology: Results of faculty surveys. Advances in Physiology Education, 35, 336-341.
Nordin, A. & Sundberg, D. (Eds.) (2014). Transnational policy-flows in European education: The making and governing of knowledge in the education policy field. Oxford: Symposium Books.
Näsström, G. (2009). Interpretation of standards with Bloom's revised taxonomy: A comparison of teachers and assessment experts. International Journal of Research & Method in Education, 32(1), 39-51.
Persell, C. H. (2010). How sociological leaders rank learning goals for introductory sociology. Teaching Sociology, 38(4), 330-339.
Sellbjer, S. (2015). Meaning in constant flow: University teachers' understanding of examination tasks. Assessment & Evaluation in Higher Education. DOI:10.1080/02602938.2015.1096900
Sellbjer, S. (2018). Från sexan till forskarutbildning – rangordning av kunskapsmål och målbeskrivningar i det svenska utbildningssystemet. Unpublished manuscript.
Sosibo, L. & Nomlomo, V. (2014). Teachers' conceptions of standards in South African Basic Education and Training: A case study. Perspectives in Education, 32(1), 73-87.
Stemler, S. E. (2004). A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability. Practical Assessment, Research & Evaluation, 9(4).
Strauss, A. & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory. Thousand Oaks: Sage Publications.
Thornberg, R. (2012). Informed grounded theory. Scandinavian Journal of Educational Research, 56(3), 243-259.