22 SES 04 B, Interactive Poster Session
The Higher Education system is considered a cornerstone for building a democratic European society in an increasingly complex world. This has implications for curriculum design, including assessment (Almeida & Castro, 2017), which is seen as a key factor in the quality of teaching, the learning process, and academic results. Existing literature shows that teaching and learning practices may influence teachers’ views of assessment (Pereira & Flores, 2016; Samuelowicz & Bain, 2002; Prosser & Trigwell, 1998). The way teachers view the teaching and learning process, as well as assessment, shapes the way they teach, the way students learn, and their assessment practices (Pereira & Flores, 2016; Fletcher et al., 2012; Brown, 2004).
In higher education, traditional and prescriptive assessment methods (mainly written tests or exams) are often used, pointing to the influence of a grading system with a hierarchical logic (Perrenoud, 1999; Pereira & Flores, 2012). Other perspectives, such as Assessment for Learning (McDowell, Wakelin, Montgomery & King, 2011), imply that students are not mere consumers of lessons and tests but assume greater responsibility in the learning and assessment processes (Flores & Veiga Simão, 2007; Pereira & Flores, 2012).
The Bologna Process reinforced the importance of the learning process and learning outcomes, as well as the development of soft skills that enhance success in the labour market (Dochy, Segers & Sluijsmans, 1999). This has implications for teaching methodologies and student-centred assessment methods (Webber, 2012). In this context, there is an emphasis on alternative assessment methods that highlight professional autonomy, collaboration, and accountability, ensuring constructive feedback, interaction with peers, and knowledge construction (Webber, 2012; Pereira & Flores, 2013).
It is widely accepted that assessment practices should include the active participation of students within a formative perspective integrated with the teaching and learning process (Fernandes, Flores & Lima, 2012; European Commission, 2013). This assumption is in line with existing literature on the crucial role of feedback in assessment and in the learning process (Black & Wiliam, 1998; Hattie & Timperley, 2007; Carless, Salter, Yang & Lam, 2011; Kyaruzi, Strijbos, Ufer & Brown, 2018), in particular the so-called learning-oriented assessment (Tang & Chow, 2007; Carless, 2009, 2015). This view is seen as a pathway to the construction of professional knowledge and self-regulated learning, with implications for teaching practices (Bergh, Ros & Beijaard, 2015). Learning-oriented assessment and peer assessment emerge as basic building blocks to promote “productive student learning” (Carless, 2009, p. 80).
This paper examines the most valued assessment methods from the perspective of university teachers in five Portuguese public universities after the implementation of the Bologna Process. It aims to contribute to a better understanding of the assessment process and of the implications of the Bologna Process for teaching and learning practices.
This paper presents findings from a broader study on the most valued assessment methods from the perspective of Portuguese Higher Education teachers. Data were collected through a questionnaire adapted from Pereira (2011, 2016), Gonçalves (2016), and Brown (2006). The questionnaire was administered between February and July 2017 in five Portuguese public universities. A convenience sample was used (Marôco, 2010; Coutinho, 2014), consisting of 185 professors from all professional categories and programmes in the following scientific areas: Medical and Health Sciences, Exact Sciences, Engineering and Technology, Social Sciences and Humanities. Data were analysed with SPSS (Statistical Package for the Social Sciences, v. 24.0). Ethical issues in educational research were considered, including approval from the University of Minho Ethics Committee (SECSH 035/2016 & SECSH 036/2016). In order to evaluate the internal validity of the instrument, a statistical validation was carried out using exploratory factor analysis (principal component analysis) (Sofroniou & Hutcheson, 1999; Hair et al., 1998). Reliability statistics were calculated using Cronbach's alpha (Cortina, 1993; Hair et al., 1998). The analysis yielded a matrix based on three factors: 1) Collective Methods; 2) Portfolios and Reflections; and 3) Individual Methods. The reliability tests dictated the exclusion of one factor. However, given the relevance of the item "Tests/Written Exams" in participants' responses and in national and international studies (e.g. Pereira, 2011, 2016; Flores et al., 2015; Myers & Myers, 2015; Sambell & McDowell, 1997), it was decided to treat it as an observable variable.
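To illustrate the reliability procedure described above, Cronbach's alpha can be computed directly from item scores using its standard formula. The sketch below is a minimal illustration on invented Likert-type responses (hypothetical data, not taken from the study, which used SPSS):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: one list of scores per item; inner lists are aligned by respondent.
    """
    k = len(items)
    # Total scale score per respondent
    totals = [sum(scores) for scores in zip(*items)]
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical responses of four teachers to three Likert-type items
items = [
    [3, 4, 3, 2],
    [3, 5, 3, 1],
    [2, 4, 3, 2],
]
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # prints 0.916
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency; low values for a factor are what would motivate excluding it, as happened in the study.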
Multivariate analysis of variance (MANOVA) and non-parametric tests were used to test statistical hypotheses about the possible influence of the following variables on the analysed scale: gender; age; professional experience; professional category; pedagogical training; programme of study; scientific area; changes in assessment practices; and the influence of the Bologna Process on assessment practices.
The results of the descriptive statistics revealed that "Tests/Written Exams" are the most valued methods (M=3.24, SD=0.784) and that "Portfolios and Reflections" are the least valued (M=2.05, SD=0.901). The data also show a positive trend in the valuing of both individual methods (M=2.67, SD=0.868) and collective methods (M=2.60, SD=0.881). The statistics presented are based on cases with no missing values for any dependent variable or factor used: 163 cases (88.1% of all cases). Hypothesis testing through MANOVA revealed statistically significant differences, mainly in the valuing of the different assessment methods according to study cycle and scientific area. Regarding "Tests/Written Exams", the non-parametric tests revealed statistically significant effects for the variables "pedagogical training", "scientific area", and "changes in assessment practices". These and other issues will be explored further in the paper.
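A group comparison of the kind reported above can be sketched with a Kruskal-Wallis test. The groups and ratings below are hypothetical and serve only to show the shape of the analysis (the study itself used SPSS; `scipy` is assumed to be available here):

```python
from scipy.stats import kruskal

# Hypothetical 1-4 ratings of "Tests/Written Exams", with teachers grouped
# by whether they report having pedagogical training
with_training = [2, 3, 2, 3, 2, 2, 3]
without_training = [4, 3, 4, 4, 3, 4, 4]

# Kruskal-Wallis H-test: a non-parametric alternative to one-way ANOVA
h_stat, p_value = kruskal(with_training, without_training)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Statistically significant difference between the groups")
```

Non-parametric tests are appropriate here because Likert-type ratings are ordinal and cannot be assumed to be normally distributed.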
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7-75. DOI: 10.1080/0969595980050102
Brown, G. T. L. (2004). Teachers’ conceptions of assessment: Implications for policy and professional development. Assessment in Education: Policy, Principles and Practice, 11(3), 305-322. DOI: 10.1080/0969594042000304609
Brown, G. T. L. (2006). Teachers' conceptions of assessment inventory - Abridged (TCoA-IIIA-Version 3-Abridged). Unpublished test. Auckland, NZ: University of Auckland.
Carless, D. (2009). Trust, distrust and their impact on assessment reform. Assessment & Evaluation in Higher Education, 34(1), 79-89.
Carless, D. (2015). Excellence in University Assessment. London: Routledge. ISBN 9781317580737
Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395-407.
Dochy, F., Segers, M., & Sluijsmans, D. (1999). The use of self-, peer and co-assessment in higher education: A review. Studies in Higher Education, 24, 331-350. DOI: 10.1080/03075079912331379935
Fletcher, R., Meyer, L., Anderson, H., Johnston, P., & Rees, M. (2012). Faculty and students conceptions of assessment in higher education. Higher Education, 64(1), 119-133. DOI: 10.1007/s10734-011-9484-1
Hair, J., Anderson, R., Tatham, R., & Black, W. (1998). Multivariate Data Analysis. Upper Saddle River, NJ: Prentice Hall.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112. DOI: 10.3102/003465430298487
Huba, M. E., & Freed, J. E. (2000). Learner-centered Assessment on College Campuses: Shifting the Focus from Teaching to Learning. Boston, MA: Allyn and Bacon.
Kyaruzi, F., Strijbos, J.-W., Ufer, S., & Brown, G. T. L. (2018). Teacher AfL perceptions and feedback practices in mathematics education among secondary schools in Tanzania. Studies in Educational Evaluation, 59, 1-9. DOI: 10.1016/j.stueduc.2018.01.004
McDowell, L., Wakelin, D., Montgomery, C., & King, S. (2011). Does assessment for learning make a difference? The development of a questionnaire to explore the student response. Assessment & Evaluation in Higher Education, 36(7), 749-765.
Myers, C. B., & Myers, S. M. (2015). The use of learner-centered assessment practices in the United States: the influence of individual and institutional contexts. Studies in Higher Education, 40(10), 1904-1918. DOI: 10.1080/03075079.2014.914164
Pereira, D. R. (2016). Assessment in Higher Education and Quality of Learning: Perceptions, Practices and Implications. Doctoral thesis in Educational Sciences, Universidade do Minho. http://hdl.handle.net/1822/43445
Prosser, M., & Trigwell, K. (1998). Understanding learning and teaching: The experience in higher education. Buckingham: SRHE & Open University Press.
Samuelowicz, K., & Bain, J. D. (2002). Identifying academics’ orientations to assessment practice. Higher Education, 43(2), 173-201. DOI: 10.1023/A:1013796916022
Webber, K. (2012). The use of learner-centered assessment in US colleges and universities. Research in Higher Education, 53(2), 201-228.