Session Information
22 SES 13 B, Success or Failure? Indicators and Admissions
Paper Session
Contribution
Promoting quality in higher education has as its purpose guaranteeing students meaningful and effective learning (Endo & Harpel, 1982; Adelman, 1984; McKeachie, 1985; Barnett, 1992). In particular, all students are required to acquire competencies that integrate multiple elements (primarily knowledge, together with skills and metacognitive aspects) and to use them in different contexts (Le Boterf, 1990; Pellerey, 2004).
Creating the conditions under which higher education can effectively promote competences for all students is therefore a requirement of great importance for a higher education system, particularly if we want to shift the balance away from a logic of mere selection, in which only those students who adapt best to traditional university learning succeed, towards a logic of promoting authentic skills and effective emancipation by raising the intellectual level of all (or most) students.
The main research questions are the following: how can we study a higher education system in order to monitor the quality of its processes and outcomes in terms of promoting learning for all students? Is it possible to build a model of variables that allows us to study the quality of higher education systems?
The main purpose of the research is the implementation of an evaluation system focused on university teachers and researchers and on their ability to analyze higher education contexts and students' specific needs (individual and collective) in order to decide on and change their ways of teaching and of supporting students' learning.
The research is based on a theoretical framework that draws on international studies in the field of educational evaluation and higher education standards (Beeby, 1977; Joint Committee, 1981; Wolf, 1987; Bondioli & Ferrari, 2004), and in particular on its formative function (Scriven, 1967; 2003), which involves a systemic approach to evaluation focused on multiple contextual variables. Moreover, it recognizes the participation of the subjects involved as a means of improving their professional development (Fetterman, 2001).
Attention must therefore be paid to an innovation of teaching-learning processes in higher education contexts that is effectively oriented to developing higher-order skills for all students, and in a widespread way. This involves tackling the broad and multifaceted problem of the conditions and aspects that make university teaching really effective. Teachers should be able to organize learning contexts and to activate teaching strategies based on students' characteristics, both in terms of entry skills and of metacognitive aspects (motivation, problem-solving skills, self-efficacy, etc.).
Nowadays there are theoretical positions that share and integrate different research paradigms. In particular, a rigorous evaluation research approach underlines the complexity of educational contexts and the need for data interpretation based on a co-construction of meanings among all the actors of the evaluated context, in a participatory and democratic perspective. This kind of educational evaluation approach promotes and supports collegiality in decision-making processes; House and Howe (see Howe & Ashcraft, 2005) call it "deliberative democratic evaluation". It is based on three principles of inclusion for all actors: the importance of everybody's interests in the evaluation process; the dialogue between actors in order to make interests and values explicit; and decision-making capacity shared in a reflexive way among all the stakeholders. Therefore, an educational evaluation approach is primarily a political question: it must take different interests into account, it is never "innocent" (Lincoln, 2003), and it entails the need to find strategies to "give voice" to actors' different meanings and values. Engaging in educational evaluation research means being involved in negotiation processes, democratically and ethically oriented, in which each stakeholder has the opportunity to cooperate in defining improvement strategies.
Method
Over the last twenty years, the international debate (see, for example, Datta, 1994; Reichardt & Rallis, 1994; Tashakkori & Teddlie, 1998, 2003; Mertens, 2005) has highlighted the need to soften the emphasis on the comparison between "old" and "new" paradigms and the "practicality" of finding possible ways of integrating them. The possibility of integrating the two approaches (quantitative-experimental and phenomenological-constructivist) has been supported by many researchers, up to the thesis of the so-called "mixed methods" and "mixed models" (Tashakkori & Teddlie, 1998; Greene, 2008). Based on the "emerging" paradigm of pragmatism, mixed methods propose the compatibility of different theoretical and methodological approaches (both qualitative and quantitative) within the same research design, according to the needs of the context and the problems identified by the researchers. In the specific field of educational evaluation, the choice of mixed approaches goes in the direction of ensuring a data collection phase that is as rigorous and reliable as possible and able to take multiple indicators into account.
As regards the methodological choices for the data collection phase, we refer to a systemic approach that breaks down the university curriculum of a course of study according to the logic of the CIPP Model (Stufflebeam, 1971; 2003), identifying significant variables and indicators in each of its areas: context, input, process and product. Moreover, the evaluation phase should be based on a view of collegiality among higher education teachers, in order to give voice to stakeholders' different meanings. From this methodological point of view, we consider teachers and researchers first of all not as "recipients" of training actions but as actors able to analyze their educational contexts, identifying needs and possible strategies for change. Secondly, the researchers gradually took on the role of trainers in response to the needs expressed by the teachers themselves, such as skills in communication, assessment and higher education curriculum redesign. More specifically, the "model" of promoting higher education quality can be pictured as an open spiral in which a data collection phase is followed by a data analysis phase, an intersubjective interpretation phase, the identification of training needs, training sessions, shared redesign actions and then a new data collection phase, and so on.
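As a purely illustrative sketch (not part of the study's instrumentation, and with all names hypothetical), the CIPP-based decomposition described above can be pictured as a simple data structure in which each indicator is tied to one of the four areas:

from dataclasses import dataclass, field
from enum import Enum

class CIPPArea(Enum):
    # The four areas of Stufflebeam's CIPP Model
    CONTEXT = "context"
    INPUT = "input"
    PROCESS = "process"
    PRODUCT = "product"

@dataclass
class Indicator:
    # One observable indicator, tied to a CIPP area; the fields are hypothetical
    area: CIPPArea
    description: str
    tools: list[str] = field(default_factory=list)  # e.g. questionnaires, document analysis

@dataclass
class CurriculumEvaluation:
    # A course of study broken down according to the CIPP logic
    course: str
    indicators: list[Indicator] = field(default_factory=list)

    def by_area(self, area: CIPPArea) -> list[Indicator]:
        # Collect the indicators identified for one CIPP area
        return [i for i in self.indicators if i.area == area]

# Example: recording one process indicator for a hypothetical course of study
evaluation = CurriculumEvaluation(course="Education Sciences")
evaluation.indicators.append(Indicator(
    area=CIPPArea.PROCESS,
    description="Educational choices are shared during staff meetings",
    tools=["teacher questionnaire", "meeting minutes analysis"],
))
print(len(evaluation.by_area(CIPPArea.PROCESS)))  # -> 1

Such a structure simply makes the context-input-process-product breakdown explicit; the actual variables and tools are those developed with the teachers in the phases described above.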
Expected Outcomes
Using the CIPP Model (Stufflebeam, 2003) we developed a procedure for studying a higher education system in order to monitor the quality of its processes and outcomes in terms of promoting learning for all students. With reference to the second research question ("Is it possible to build a model of variables that allows us to study the quality of higher education systems?"), we hypothesized two macro-variables that can affect the quality of university teaching:
- a curriculum organization that facilitates and supports the acquisition of students' skills;
- a teaching group that uses appropriate strategies to promote students' skills.
For the first macro-variable we studied and developed specific research tools and quality indicators, such as:
- the organization of the teachings with explicit links between disciplines;
- an adequate organization of calendars, timetables and classrooms;
- a way of sharing educational choices during staff meetings;
- teachers' participation and sense of self-efficacy;
- the transparency of exam programmes and evaluation criteria on websites.
For the second macro-variable we followed a group of thirty higher education teachers who used appropriate strategies to promote students' skills. Through valid indicators, we investigated dimensions linked to the validity of the transmitted knowledge, the teachers' competence in the construction of expert skills, and the care taken over metacognitive aspects. Some examples of outcome indicators for this macro-variable are the following:
- the teacher has excellent disciplinary preparation and experience in field research (in different contexts);
- the teacher uses active and laboratory teaching;
- the teacher knows the value and use of metacognitive strategies (formative feedback and assessment, different teaching strategies, use of technologies).
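As a minimal sketch of how such indicators might be recorded (the boolean scoring and the coverage measure are our assumptions for illustration, not a procedure from the study), the first macro-variable can be treated as a simple checklist:

# Hypothetical checklist for the first macro-variable (curriculum organization);
# the entries mirror the indicators listed above, the values are invented.
curriculum_organization = {
    "explicit links between disciplines": True,
    "adequate calendars, timetables and classrooms": True,
    "educational choices shared during staff meetings": False,
    "teacher participation and sense of self-efficacy": True,
    "exam programmes and evaluation criteria published online": True,
}

def coverage(checklist: dict[str, bool]) -> float:
    # Share of the quality indicators that are satisfied
    return sum(checklist.values()) / len(checklist)

print(f"curriculum organization coverage: {coverage(curriculum_organization):.0%}")  # -> 80%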
References
Adelman, C. (1984). Starting with students: Promising approaches in American higher education. Washington, DC: National Institute of Education.
Barnett, R. (1992). Improving Higher Education: Total Quality Care. London: Open University Press/SRHE.
Beeby, C.E. (1977). The meaning of evaluation. Current Issues in Education, 4, 68-78.
Bondioli, A., & Ferrari, M. (2004). Verso un modello di valutazione formativa. Bergamo: Junior.
Datta, L. (1994). Paradigm wars: A basis for peaceful coexistence and beyond. In C.S. Reichardt & S.F. Rallis (Eds.), The qualitative-quantitative debate: New perspectives (pp. 53-70). San Francisco: Jossey-Bass.
Endo, J.J., & Harpel, R.L. (1982). The effect of student-faculty interaction on students' educational outcomes. Research in Higher Education, 16(2), 115-138.
Fetterman, D.M. (2001). Foundations of empowerment evaluation: Step by step. Thousand Oaks, CA: Sage.
Greene, J.C. (2008). Is mixed methods social inquiry a distinctive methodology? Journal of Mixed Methods Research, 2(1), 7-22.
Howe, K., & Ashcraft, C. (2005). Deliberative democratic evaluation: Successes and limitations of an evaluation of school choice. Teachers College Record, 107(10), 2274-2297.
Joint Committee on Standards for Educational Evaluation (1981). Standards for Evaluation of Educational Programs, Projects and Materials. Thousand Oaks, CA: Sage.
Le Boterf, G. (1990). De la compétence: Essai sur un attracteur étrange. Paris: Les Éditions de l'Organisation.
Lincoln, Y.S. (2003). Constructivist knowing, participatory ethics and responsive evaluation: A model for the 21st century. In T. Kellaghan & D.L. Stufflebeam (Eds.), International Handbook of Educational Evaluation (pp. 69-78). Dordrecht-Boston-London: Kluwer.
McKeachie, W.J. (1985). Improving undergraduate education through faculty development. San Francisco: Jossey-Bass.
Mertens, D.M. (2005). Research and evaluation in education and psychology: Integrating diversity with quantitative, qualitative, and mixed methods. Thousand Oaks, CA: Sage.
Pellerey, M. (2004). Competenze individuali e portfolio. Scandicci: La Nuova Italia.
Reichardt, C.S., & Rallis, S.F. (Eds.) (1994). The qualitative-quantitative debate: New perspectives. San Francisco: Jossey-Bass.
Scriven, M. (1967). The methodology of evaluation. Chicago: Rand McNally.
Scriven, M. (2003). Evaluation theory and metatheory. In T. Kellaghan & D.L. Stufflebeam (Eds.), International Handbook of Educational Evaluation (pp. 15-30). Dordrecht-Boston-London: Kluwer.
Stufflebeam, D.L., et al. (1971). Educational Evaluation and Decision-Making. Itasca, IL: Peacock.
Stufflebeam, D.L. (2003). The CIPP Model for evaluation. In T. Kellaghan & D.L. Stufflebeam (Eds.), International Handbook of Educational Evaluation (pp. 31-62). Dordrecht-Boston-London: Kluwer.
Tashakkori, A., & Teddlie, C. (1998). Mixed Methodology: Combining Qualitative and Quantitative Approaches. Thousand Oaks, CA: Sage.
Tashakkori, A., & Teddlie, C. (Eds.) (2003). Handbook of Mixed Methods in Social & Behavioral Research. Thousand Oaks, CA: Sage.