10 SES 11 C, Research on Professional Knowledge & Identity in Teacher Education
The strong focus of professional learning and practice on teacher assessment literacy (AL) is supported by strong theoretical and empirical evidence (Black & Wiliam, 1999; Hattie, 2008; Popham, 2009; Wiliam, 2017). However, even with advanced conceptualisations of this construct, the quality of teachers’ assessment practices remains relatively low (Davison & Michell, 2014). A similar situation is found in the Philippine educational system, where the current assessment reform has not gained significant traction in raising teacher assessment literacy. Department of Education (DepEd) Order No. 8, series of 2015, otherwise known as the “Policy Guidelines on Classroom Assessment for the K to 12 Basic Education Program”, specifies guidelines for implementing the principles and practices of effective assessment. However, despite significant efforts to shift teachers’ examination-driven practices towards a more student-centred approach that uses assessment to support students in their learning, there is little evidence of the reform’s effectiveness.
One of the most cited and well-argued factors hindering teachers’ ability to change their practices is a failure to change their belief systems (Fives & Buehl, 2016). Research into teacher beliefs about assessment shows that the alignment of these beliefs with the philosophy of effective practice frames and guides teachers’ adoption and implementation of effective assessment practices (Looney, Cumming, van der Kleij, & Harris, 2017). Hence, to change teachers’ assessment practices, it is imperative to explore the beliefs they hold and to find ways to align these beliefs with the aims of a particular assessment reform.
Researchers argue that a program aimed at supporting teachers to enhance their assessment literacy should begin by measuring their beliefs about assessment (Brown, 2004) and then develop professional programs to change these beliefs. Several tools have been developed to measure teachers’ beliefs about assessment (Loc, 2016) and conceptions of assessment (Bonner, 2016; Brown, 2004), but these were developed for specific contexts. To effectively support the implementation of DepEd Order No. 8, series of 2015, there is a need for a more context-driven tool to measure Philippine teachers’ assessment beliefs, because teachers from different contexts hold different views on assessment (Brown, Hui, Yu, & Kennedy, 2011; Dayal & Lingam, 2015).
Drawing upon the principles of effective assessment indicated in DepEd Order No. 8, series of 2015, curriculum innovation and change, and the philosophical framework of assessment for learning (AfL), we explored the dimensions of teacher assessment beliefs by developing a tool to guide the development and implementation of a more effective assessment literacy program. Theoretical and empirical approaches were combined, using several rounds of expert validation and exploratory and confirmatory factor analyses. Factor analyses extracted nine dimensions that describe teachers’ assessment beliefs: Factor 1 – assessment for professional learning; Factor 2 – assessment for motivation; Factor 3 – assessment for measurement; Factor 4 – assessment for planning; Factor 5 – assessment for engagement; Factor 6 – assessment for learning; Factor 7 – assessment for evaluation; Factor 8 – assessment for norm-referencing; and Factor 9 – assessment for instructional accountability. A subsequent second-order factor analysis generated two higher-order factors: Factors 1, 4 and 9 loaded onto G1 – assessment for teacher development, and Factors 2, 3, 5, 6, 7 and 8 onto G2 – assessment for student learning. The findings of this study present a new way to conceptualise teacher assessment beliefs in a particular context. An on-going study is establishing further validity evidence for the tool in terms of changing teachers’ beliefs, the consequences of these changes for teachers’ practices and, ultimately, their impact on student learning.
We used a two-stage approach to develop the tool for measuring teacher beliefs about assessment, aligned to DepEd Order No. 8, series of 2015, and underpinned by the philosophical framework of AfL. In the first stage, we used a theoretical approach (Bryman, 2016) and engaged 10 teachers, three principals and two education supervisors in focus groups to explore their perceptions of the purposes and functions of assessment. The tool then underwent a series of validations with five teachers and two assessment experts. In the second stage, the tool was piloted with 38 teachers. Analysis of the pilot data using a partial credit Rasch analysis in ConQuest software showed that the tool had a high Cronbach’s alpha (0.96), discrimination indices ranging from 0.45 to 0.72, all mean square fit values close to 1 and within the confidence interval, and no evidence of category disordering. These results initially suggested that the tool has good psychometric properties. To establish empirical support for the construct, we recruited teachers for a survey; 568 participated (142 male; 408 from public schools). We randomly split the sample into two data sets: one for exploratory factor analysis (EFA) and one for confirmatory factor analysis (CFA). For the EFA, we used maximum likelihood estimation with direct oblimin rotation in SPSS v24. Initial screening of the data checked for factorability: we examined the item correlations, the Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett’s test of sphericity. In the EFA itself, we examined the eigenvalues, scree plot and cross-loadings, and retained only items with factor loadings greater than 0.30. The CFA was conducted using Mplus v7 (Muthén & Muthén, 1998-2012).
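The screening steps described above (a random split of the sample, Bartlett’s test of sphericity, the Kaiser-Meyer-Olkin measure, and an eigenvalue check) can be sketched in a few lines of Python. This is a minimal illustration on synthetic data, not the study’s data set: the number of items and the two-factor structure below are assumptions made purely for demonstration.

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: H0 = the correlation matrix is an identity."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (0 to 1; higher is better)."""
    R = np.corrcoef(data, rowvar=False)
    Rinv = np.linalg.inv(R)
    d = np.sqrt(np.diag(Rinv))
    partial = -Rinv / np.outer(d, d)  # anti-image (partial) correlations
    off = ~np.eye(R.shape[0], dtype=bool)
    r2, a2 = (R[off] ** 2).sum(), (partial[off] ** 2).sum()
    return r2 / (r2 + a2)

rng = np.random.default_rng(0)
# Synthetic stand-in for the survey: 568 respondents, 12 correlated items.
latent = rng.normal(size=(568, 2))
items = latent @ rng.normal(size=(2, 12)) + rng.normal(scale=0.8, size=(568, 12))

# Random split into an EFA half and a CFA half, as described in the text.
idx = rng.permutation(len(items))
efa_half, cfa_half = items[idx[:284]], items[idx[284:]]

stat, p_value = bartlett_sphericity(efa_half)
print(f"Bartlett chi-square = {stat:.1f}, p = {p_value:.3g}")
print(f"KMO = {kmo(efa_half):.2f}")

# Kaiser criterion: count eigenvalues of the correlation matrix greater than 1.
eig = np.linalg.eigvalsh(np.corrcoef(efa_half, rowvar=False))
print("Eigenvalues > 1:", (eig > 1).sum())
```

With a genuine factor structure in the data, Bartlett’s test should be clearly significant and the KMO value well above the conventional 0.60 floor; the rotation and loading-threshold steps would then be run in a dedicated package such as SPSS, as in the study.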
We used the conventional threshold values (Kline, 2010; Marsh et al., 2009; Tabachnick & Fidell, 2007) for the following indices: root mean square error of approximation (RMSEA), comparative fit index (CFI), Tucker-Lewis index (TLI) and standardised root mean square residual (SRMR).
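Three of the indices named above can be computed directly from the model and baseline chi-square statistics (SRMR additionally requires the residual correlation matrix, so it is omitted here). The sketch below uses the standard formulas with hypothetical chi-square values, not the study’s actual results.

```python
from math import sqrt

def fit_indices(chisq_m, df_m, chisq_b, df_b, n):
    """RMSEA, CFI and TLI from the model (m) and baseline (b) chi-square tests."""
    rmsea = sqrt(max(chisq_m / df_m - 1, 0) / (n - 1))
    cfi = 1 - max(chisq_m - df_m, 0) / max(chisq_b - df_b, chisq_m - df_m, 1e-12)
    tli = ((chisq_b / df_b) - (chisq_m / df_m)) / ((chisq_b / df_b) - 1)
    return rmsea, cfi, tli

# Hypothetical values for illustration only (n = size of one half-sample).
rmsea, cfi, tli = fit_indices(chisq_m=220.0, df_m=188, chisq_b=5000.0, df_b=210, n=284)
print(f"RMSEA = {rmsea:.3f}, CFI = {cfi:.3f}, TLI = {tli:.3f}")

# Conventional cut-offs: RMSEA <= .06, CFI and TLI >= .95, SRMR <= .08.
```

In practice these values are read straight from the Mplus output; the point of the formulas is that CFI and TLI reward improvement over the independence (baseline) model, while RMSEA penalises model misfit per degree of freedom.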
One significant contribution of our study is the explicit inclusion of dimensions of teacher beliefs about assessment for enhancing and sustaining student engagement and motivation. These dimensions highlight the long-argued role of assessment in improving student engagement and motivation (Bevitt, 2015; Dweck, 2007). In addition, the tool includes a dimension on teachers’ belief that assessment results can be used to identify teachers’ professional learning needs, a function of assessment researched extensively by Timperley, Wilson, Barrar, and Fung (2008). Furthermore, three dimensions relate to the high-stakes functions of assessment: teachers’ beliefs about assessment for measurement, for evaluation of student performance, and for norm-referencing. More generally, some of the extracted dimensions are consistent with previous studies. For example, the dimensions on assessment for learning, for planning, for instructional accountability and for measurement align, respectively, with the dimensions of assessment for improvement of student learning, improvement of teaching, school accountability, and assessment as a valid measure described by Brown (2004). Another contribution of our study is the extraction of two general higher-order factors, which were initially highlighted in the study of Brown (2004). In a broader sense, these second-order factors categorise teachers’ beliefs into two groups: beliefs about assessment for the improvement of their own practices, and beliefs about assessment for the improvement of student learning. Overall, the alignment between the theoretical framework of teachers’ beliefs about assessment and the empirical support for its dimensionality supports the conceptualisation of this construct with nine factors and two higher-order factors.
Although we do not intend to develop a universal tool for all teachers across different contexts, it would be worthwhile to explore the measurement invariance of the tool.
Bevitt, S. (2015). Assessment innovation and student experience: A new assessment challenge and call for a multi-perspective approach to assessment research. Assessment & Evaluation in Higher Education, 40(1).
Black, P., & Wiliam, D. (1999). Assessment for learning: Beyond the black box. Cambridge: University of Cambridge School of Education.
Bonner, S. (2016). Teacher perceptions of assessment: Competing narratives. In G. T. L. Brown & L. Harris (Eds.), Handbook of social conditions in assessment. New York: Routledge.
Brown, G. T. L. (2004). Teachers' conceptions of assessment: Implications for policy and professional development. Assessment in Education: Principles, Policy & Practice, 11(3), 301-318.
Brown, G. T. L., Hui, S. K. F., Yu, F. W. M., & Kennedy, K. J. (2011). Teachers’ conceptions of assessment in Chinese contexts: A tripartite model of accountability, improvement, and irrelevance. International Journal of Educational Research, 50(5), 307-320.
Davison, C., & Michell, M. (2014). EAL assessment: What do Australian teachers want? TESOL in Context, 24(2), 51-72.
Dweck, C. (2007). Mindset: The new psychology of success. New York: Ballantine Books.
Fives, H., & Buehl, M. M. (2016). Teachers’ beliefs, in the context of policy reform. Policy Insights from the Behavioral and Brain Sciences, 3(1), 114-121.
Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.
Kline, R. B. (2010). Principles and practices of structural equation modeling (3rd ed.). New York, NY: Guilford Press.
Loc, N. T. H. (2016). Development of Vietnamese pre-service EFL teachers’ assessment literacy (Doctoral thesis). Victoria University of Wellington, New Zealand.
Looney, A., Cumming, J., van der Kleij, F., & Harris, K. (2017). Reconceptualising the role of teachers as assessors: Teacher assessment identity. Assessment in Education: Principles, Policy & Practice, 1-26.
Marsh, H. W., Muthén, B. O., Asparouhov, T., Lüdtke, O., Robitzsch, A., Morin, A. J. S., & Trautwein, U. (2009). Exploratory structural equation modeling, integrating CFA and EFA: Application to students' evaluations of university teaching. Structural Equation Modeling: A Multidisciplinary Journal, 16(3), 439-476.
Popham, W. J. (2009). Assessment literacy for teachers: Faddish or fundamental? Theory Into Practice, 48(1), 4-11.
Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics. Boston: Pearson Education.
Timperley, H., Wilson, A., Barrar, H., & Fung, I. (2008). Teacher professional learning and development: Best evidence synthesis on professional learning and development. Retrieved from http://www.oecd.org/education/school/48727127.pdf
Wiliam, D. (2017). Assessment and learning: Some reflections. Assessment in Education: Principles, Policy & Practice, 24(3), 394-403.