Session Information
09 SES 01 C, Assessment Practice and Competency Development: Roles and Perceptions of Students and Teachers
Paper Session
Contribution
An accountability and performativity wave is currently sweeping the world. International studies and national evaluations are being used in the political debate, and many school systems are influenced by concepts, models and various forms of protocol from a new public management discourse. However, focusing on what caused the decline in students' performance does not answer what needs to be done to reverse it. Using assessment for learning may be one way to enhance students’ learning. Assessment for learning has been a research area of interest for many years, yet although the research is extensive, few studies focus on student perspectives.
This study brings forth students’ voices on how to make assessment practices accessible, tailored to students’ needs and supportive of their learning. The study also problematizes the application of assessment methods without prior consideration of the embedded structures and pre-conditions that enable or constrain assessment practices. These structures and pre-conditions emanate from several practices integrated in the school context. Besides focusing on teaching and learning, this study therefore also zooms in on the practices of professional learning, leading and researching.
The study aims to answer the following questions:
- What enables or constrains the development of accessible and reciprocal assessment practices?
- In what ways may interconnected practices enable or constrain the development of accessible and reciprocal assessment practices?
The theoretical framework consists of a nomological network of assessment, practice and validity theories that form a construct of valid assessment practices. Empirical data is validated against the construct in order to understand what enables or constrains students and teachers when attempting to develop accessible and reciprocal assessment practices.
The construct builds on prior research that brings forth the importance of clarified goals, feedback at the process or regulatory level, based on a solid diagnosis of student performance in relation to intended learning outcomes, and comprehensive suggestions on how to proceed. Drawing on the intended and non-intended impact of assessment on students, the construct also focuses on access and agency in assessment practices.
Assessment practices are here understood as practices closely connected to planning, enacting and evaluating teaching and learning activities. Assessment consists of interactive, dynamic and collaborative activities that are integrated in teaching activities and connected to classroom practice. These activities affect our understanding of learning, the learner and what is supposed to be learnt. Teachers need to clarify what students are expected to do with their abilities, and create qualitative learning practices where teaching and learning activities stimulate these abilities and enhance student participation. Assessment practices therefore need to be accessible for students and tailored to meet students’ needs.
When developing practices, the structures and pre-conditions that embed those practices need to be considered. Practices can be understood as human activities where language, activities and relationships hang together in specific ways. A practice is located in space and time and is dependent on cultural-discursive, material-economic and socio-political arrangements that enable or constrain the activities taking place within it. Practices (e.g. teaching) can be affected by interconnected practices (e.g. professional learning, leading), and are enmeshed with pre-conditions that enable and constrain them.
Validity theories give guidance on whether interpretations, decisions and actions in assessment practices are valid. Validation is related to the intent to improve learning, whereas validity is dependent on how well that intent is achieved. Within this study, validation is used to examine the intention to enhance learning through assessment practices. The validity framework highlights what enables and constrains students’ possibilities to access assessment practices. Valid assessment practices are dependent on the degree to which students are able to reach the learning objectives, and on the intended and non-intended impact of assessment on students’ learning.
Method
Expected Outcomes
References
Black, P. & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5-31.
Cronbach, L. J. (1971). Test validation. In R. L. Thorndike (Ed.), Educational Measurement (2nd ed., pp. 443-507). Washington, DC: American Council on Education.
Gipps, C. (1999). Socio-cultural aspects of assessment. Review of Research in Education, 24, 355-392.
Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational Measurement (4th ed., pp. 17-64). Westport, CT: American Council on Education/Praeger Publishers.
Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1-73.
Kemmis, S. & Grootenboer, P. (2008). Situating praxis in practice: Practice architectures and the cultural, social and material conditions for practice. In S. Kemmis & T. J. Smith (Eds.), Enabling Praxis: Challenges for Education. Rotterdam: Sense Publishers.
Kemmis, S., Wilkinson, J., Edwards-Groves, C., Hardy, I., Grootenboer, P., & Bristol, L. (2014). Changing Practices, Changing Education. Singapore: Springer.
Messick, S. A. (1989). Validity. In R. L. Linn (Ed.), Educational Measurement (3rd ed., pp. 13-103). New York: American Council on Education/Macmillan.
Moss, P., Girard, B. J. & Haniford, L. C. (2006). Validity in educational assessment. Review of Research in Education, 30(1), 109-162.
Nicolini, D. (2013). Practice Theory, Work & Organization. Oxford: Oxford University Press.
Nusche, D., et al. (2011). OECD Reviews of Evaluation and Assessment in Education: Sweden. Paris: OECD.
OECD (2013). PISA 2012 Results in Focus: What 15-year-olds know and what they can do with what they know. Paris: OECD.
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.
Schatzki, T. (2010). The Timespace of Human Activity: On Performance, Society, and History as Indeterminate Teleological Events. Lanham, MD: Lexington Books.
Shuell, T. (1986). Cognitive conceptions of learning. Review of Educational Research, 56(4), 411-436.
Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. Gagné & M. Scriven (Eds.), Perspectives on Curriculum Evaluation (AERA Monograph Series on Curriculum Evaluation). Chicago: Rand McNally.
Stobart, G. (2012). Validity in formative assessment. In J. Gardner (Ed.), Assessment and Learning (2nd ed., pp. 233-242). London: Sage Publications.
Taras, M. (2005). Assessment – summative and formative – some theoretical reflections. British Journal of Educational Studies, 53(4), 466-478.
Taylor, P., Fraser, B., & Fisher, D. (1997). Monitoring constructivist classroom learning environments. International Journal of Educational Research, 27, 293-301.