Session Information
09 SES 10 B, Formative and Summative Assessments
Paper Session
Contribution
This paper presents exploratory research aimed at building a methodological model, the ARCA Model (Assessment, Rubrics, Certification of Achievements Model), for the certification of achievements in Italian primary and lower secondary schools. Given the central role that competences have assumed within the school curriculum in many European countries (Recommendation of the European Parliament and of the Council, 18 June 2006), it becomes important to develop methodological models for certifying students’ learning outcomes. The theoretical framework that inspired this research is authentic assessment, which, as argued by McClelland (1994), Gardner (1992), and Glaser and Resnick (1989), aims to develop multidimensional methods of assessment able to overcome the rigidity that is sometimes attributed to assessment through testing. From this perspective, the task of assessment is not so much to measure learning as to provide information on the processes that generate learning and on how the knowledge acquired is put into practice through effective behaviours usable both inside and outside school. Authentic assessment focuses on how the student builds personal learning by acting in different situations, rather than limiting assessment to the standardization of results. In this sense, it can promote, in the school context as well, a new way of thinking about assessment that draws on direct forms of performance assessment: authentic assessment does not assume any predictive or projective function, but evaluates the action produced directly in the field for what it is; learning is therefore seen as a product of contextualized knowledge, transferable to similar situations of use (near transfer) (Worthen, 1992; Chase, 1999; Wiggins, 1990). In this sense, authentic assessment is understood as a form of assessment for learning.
According to this theoretical perspective (Shepard, 1991; Stiggins, 1994; Wiggins, 1993), authentic assessment:
1. is based on real tasks rather than on evidence with a merely predictive value;
2. requires judgment and innovation, as it leads to the solution of problems that may have more than one right answer or multiple solution paths;
3. asks the student to participate in the construction of knowledge by identifying, recognizing and processing the main structures of the school subjects;
4. requires the effective use of a repertoire of knowledge and functional skills to deal with complex tasks, not merely to show the amount and extent of the knowledge, skills and competences acquired, but to highlight the plasticity, integration and connectivity of knowledge, both among its elements and with the surrounding reality;
5. gives the opportunity to select, repeat and test patterns of action, check resources, receive feedback and improve performance by increasing levels of mastery (performance-feedback-revision-performance).
Authentic assessment aims to provide feedback on both the products and the processes of learning. In this way it allows information to be collected on critical thinking, problem solving, metacognition, working efficiency and reasoning (Arter & Bond, 1996). To do this, "authentic tasks" or "reality tasks" are used. An authentic task requires the use of internal capabilities and of the knowledge, skills and competences that students have learned at school or in other non-formal and informal educational contexts. Authentic assessment is therefore founded on the belief that academic achievement is not given by the accumulation of knowledge; rather, it rests on the ability to generalize, model, identify relationships and transfer acquired knowledge to real contexts. Thus, assessment and certification of achievements are closely related to highlighting how students’ knowledge has generated competences that can be used effectively in multiple contexts and learning situations. The research question of this study, therefore, is how to develop methodological models that can support teachers in certifying the achievements acquired by learners, so that these achievements can be recognized in subsequent grades of schooling and in the world of work.
Method
Expected Outcomes
References
Arter, J., & Bond, L. (1996). Why is assessment changing. In Blum, R.E., & Arter, J.A. (Eds.), A handbook for student performance assessment in an era of restructuring. Alexandria (VA): Association for Supervision and Curriculum Development.
Bennett, R.E., Jenkins, F., Persky, H., & Weiss, A. (2003). Assessing complex problem-solving performances. Assessment in Education: Principles, Policy and Practice, 10(3), 347-365.
Brooks, J.G., & Brooks, M. (1999). The case for constructivist classrooms. Alexandria (VA): Association for Supervision and Curriculum Development.
Brown, J.S., Collins, A., & Duguid, P. (1996). Situated cognition and the culture of learning. In McLellan, H. (Ed.), Situated learning perspectives. Englewood Cliffs (NJ): Educational Technology Publications.
Chase, C.I. (1999). Contemporary assessment for educators. Reading (MA): Longman.
Creswell, J.W., & Plano Clark, V.L. (2011). Designing and conducting mixed methods research. Thousand Oaks (CA): Sage.
Danielson, C., & Hansen, P. (1999). A collection of performance tasks and rubrics. Larchmont (NY): Eye On Education.
Darling-Hammond, L. (1994). Performance assessment and educational equity. Harvard Educational Review, 64(1).
Gardner, H. (1992). Assessment in context: The alternative to standardized testing. In Gifford, B.R., & O’Connor, M.C. (Eds.), Changing assessments: Alternative views of aptitude, achievement and instruction. Boston: Kluwer Academic Publishers.
Glaser, R., & Resnick, L.B. (Eds.) (1989). Knowing, learning and instruction: Essays in honor of Robert Glaser. Hillsdale (NJ): Erlbaum.
Goodrich, H. (1996). Understanding rubrics. Educational Leadership, 54(4).
Gredler, E. (1999). Classroom assessment and learning. Reading (MA): Longman.
Hart, D. (1994). Authentic assessment: A handbook for educators. Menlo Park (CA): Addison-Wesley.
McClelland, D.C. (1994). The knowledge-testing-educational complex strikes back. American Psychologist, 1.
McTighe, J., & Wiggins, G. (2004). Understanding by design: Professional development workbook. Alexandria (VA): Association for Supervision and Curriculum Development.
OECD (2002). The definition and selection of key competencies (DeSeCo): Theoretical and conceptual foundations. Strategic paper, 7 October 2002.
Shepard, L.A. (1991). Psychometricians’ beliefs about learning. Educational Researcher, 20.
Stevens, D.D., & Levi, A.J. (2005). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback and promote student learning. Sterling (VA): Stylus.
Stiggins, R.J. (1994). Student-centered classroom assessment. New York: Macmillan.
Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research & Evaluation, 2(2).
Wiggins, G. (1993). Assessing student performance: Exploring the purpose and limits of testing. San Francisco (CA): Jossey-Bass.
Worthen, B.R. (1992). Measurement and assessment in schools. Reading (MA): Longman.