Session Information
09 SES 04 B, Assessments in Higher Education
Paper Session
Contribution
Competency-based learning in Spain has driven a reconfiguration of higher education to meet current training demands. Accordingly, institutions and university regulations promote a "constructive alignment" of the essential elements of the curriculum (Biggs, 2005), assigning competencies the role of a curricular model that lends coherence to the new design of degrees.
However, a review of most educational programmes is enough to show that their inclusion in the curriculum remains unresolved (Mateo and Vlachopoulos, 2013) or that, in most cases, it amounts to the mere enunciation of a set of competencies (Escudero, 2008), without any strategic or methodological approach to integrating them across the different subjects.
Moreover, the educational proposals of the European Higher Education Area (EHEA), such as the use of active methodologies, teaching oriented towards self-regulated and independent learning, the diversification of learning activities (simulations, portfolios, forums), and the consideration of multidimensional competencies, require new evaluation tools that are more dialogic and comprehensive than traditional paper-and-pencil tests (Ibarra and Rodríguez-Gómez, 2010; Padilla and Gil, 2008).
In the university context, scoring rubrics are considered an innovative tool for collecting evidence of competency acquisition (Baryla, Shelley and Trainor, 2012; White, 2011; Cebrián, 2012; Reddy and Andrade, 2010). Their potential lies in their capacity to deliver well-adjusted judgements of the quality of students' work across a wide range of subjects and tasks (Blanco, 2008).
In the new models of student competency development, the curriculum is structured not around thematic units but around learning activities (Mateo, 2006). Regarding the activities or tasks evaluated, a question arises: which activities are assessed with rubrics? Are they the usual assimilative and reproductive activities typical of the traditional approach, or tasks more focused on organizing and sharing information, participating in a simulation, self-evaluating a report, and so on?
To answer these questions, we took as a reference the following classification of activities by Marcelo, Yot, Mayor, Sánchez, Murillo, Rodríguez-López and Pardo (2014): assimilative, information management, application, communication, production, experiential and evaluative.
Moreover, Villa and Poblete (2011, 151) point out that "the difficulty of evaluating competencies can vary greatly from one competency to another, because some of them are more 'saturated' with knowledge, skills and values than others". Consequently, drawing on the concept and classification of generic competencies in the Tuning Project (González and Wagenaar, 2003), we pose the following question: which generic competencies are most often assessed with rubrics?
Regarding the type of rubric, Blanco (2008, 176) notes that "the selection of one kind of rubric or another depends mainly on the use to be made of the outcomes, that is, whether the emphasis falls more on formative or on summative aspects. Other factors to consider are the time required, the nature of the task itself and the specific performance criteria being observed". Based on these assumptions, we examine: which kinds of rubrics do teachers use, analytic (formative) or holistic (summative)? Do professors know the pedagogical and technical requirements for designing rubrics?
These and other issues reflect the shift in the educational paradigm towards new approaches and tools for evaluating competencies. Within this theoretical framework, we set up a research study to identify the aims that professors pursue when designing an evaluation rubric. In addition, we analyze the types of rubrics that professors use to support and guide the teaching and learning processes.
Method
Expected Outcomes
References
Baryla, E., Shelley, G. & Trainor, W. (2012). Transforming Rubrics Using Factor Analysis. Practical Assessment, Research & Evaluation, 17 (4).
Biggs, J. B. (2005). Calidad del aprendizaje universitario. Madrid: Narcea.
Blanco, A. (2008). Las rúbricas: un instrumento útil para la evaluación de competencias. En L. Prieto (Ed.), La enseñanza universitaria centrada en el aprendizaje (pp. 171-188). Barcelona: Octaedro.
Blanco, A. (2011). Tendencias actuales de la investigación educativa sobre las rúbricas. En K. Bujan, I. Rekelde & P. Aramendi, La evaluación de competencias en la Educación Superior. Las rúbricas como instrumento de evaluación (pp. 59-74). Seville: MAD.
Cebrián, M. (Ed.) (2012). Presentación. Congreso Internacional de Evaluación mediante e-Rúbrica. Málaga: University of Malaga.
Escudero, J. M. (2008). Las competencias profesionales y la formación universitaria: posibilidades y riesgos. REDU, 1.
González, J. & Wagenaar, R. (2003). Tuning Educational Structures in Europe. Informe final Fase 1. Bilbao: Universidad de Deusto.
Ibarra, M. S. & Rodríguez-Gómez, G. (2010). Aproximación al discurso dominante sobre la evaluación del aprendizaje en la Universidad. Revista de Educación, 351, 385-407.
Marcelo, C., Yot, C., Mayor, C., Sánchez, M., Murillo, P., Rodríguez-López, J. M. & Pardo, A. (2014). Las actividades de aprendizaje en la enseñanza universitaria: ¿hacia un aprendizaje autónomo de los alumnos? Revista de Educación, 363, 334-359.
Mateo, J. (2006). Claves para el diseño de un nuevo marco conceptual para la medición y evaluación educativas. Revista de Investigación Educativa, 24 (1), 165-189.
Mateo, J. & Vlachopoulos, D. (2013). Evaluación en la universidad en el contexto de un nuevo paradigma para la educación superior. Educación XX1: Revista de la Facultad de Educación, 16 (2), 183-207.
Padilla, M. T. & Gil, J. (2008). La evaluación orientada al aprendizaje en la Educación Superior: condiciones y estrategias para su aplicación en la docencia universitaria. Revista Española de Pedagogía, 241, 467-486.
Santos, M. A. (1999). 20 paradojas de la evaluación del alumnado en la Universidad española. Revista Electrónica Interuniversitaria de Formación del Profesorado, 2 (1).
Reddy, Y. M. & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35 (4), 435-448.
Tójar, J. C. (2006). Investigación cualitativa. Comprender y actuar. Madrid: La Muralla.
Villa, A. & Poblete, M. (2011). Evaluación de competencias genéricas: principios, oportunidades y limitaciones. Bordón, 63 (1), 147-170.