Session Information
10 SES 11 D, Mentoring in Teacher Education
Paper Session
Contribution
Across the globe, mentoring is used in various school contexts for various purposes and is viewed as a key professional learning tool from initial teacher education (ITE) to senior leadership development. In the context of ITE, by providing preservice teachers (PSTs) with quality mentoring, the supervising teacher (ST) helps build the capability and resilience aspiring teachers require to transition effectively into the profession. However, no instrument exists to measure the elements that constitute a quality mentor. Indeed, a wide-ranging review by Banville (2002) on the role of supervising teachers (i.e. mentors) in the USA concluded that “there are neither clear descriptions of their role nor any standards for their functioning or preparation” (p. 346), and internationally, conceptions of what constitutes a quality mentor remain confused. To our knowledge, no instrument currently exists to evaluate the standards for quality mentoring or, more specifically, a quality mentor of PSTs in Australia. Clearly, a gap exists in the extant research and knowledge of mentoring standards and, in particular, quality indicators. This study fills this gap by developing an instrument to measure mentors’ quality and then exploring preservice teachers’ perceptions of their mentors’ quality and of the practices the mentees believed would be required to assist their development as future teachers.
The framework for this study is based on our systematic literature review (Ellis et al., 2020), which sought to develop a body of knowledge by identifying the elements of quality PST mentoring. Drawing on 70 articles published between 2009 and 2019, we identified the indicators of this construct. Although that paper provides a sound conceptualisation of an effective mentor, no empirical evidence yet supports its dimensionality.
Building on our theoretical paper, we developed a tool that STs can use to identify their strengths and weaknesses in their role as mentors. We administered the tool to PSTs to gather empirical evidence for the dimensionality of this construct. Such empirical support would warrant the use of a reliable and valid tool to measure supervising teachers’ mentoring practices, and the tool can be a significant input for professional development activities that enhance STs’ mentoring skills.
Method
We extended our theoretical paper (Ellis et al., 2020) and applied an empirical approach (Tabachnick & Fidell, 2007; Worthington & Whittaker, 2006) to scale development. The elements identified in our previous work were used as indicators of the construct, and a six-point Likert scale (from strongly disagree to strongly agree) was used. The tool has a Cronbach’s alpha of 0.91; discrimination indexes above the 0.40 threshold value (0.48 to 0.79); and all items have weighted mean square fit estimates within the confidence interval, with residuals, given by the t-statistics, within the range of -2 to +2. These indexes support the reliability of the tool and the fit of its items to the construct. We recruited PSTs who had undergone professional experience in schools with supervising teachers assigned to mentor them, following the recruitment procedure and informed consent outlined in our Ethics Approval (withheld for anonymity). Four hundred and fifty-four PSTs (age range 19-29; 343 female) completed the survey. The data were randomly split into two data sets, one for exploratory factor analysis (EFA) and one for subsequent confirmatory factor analysis (CFA). In the EFA, data screening was conducted, and the initial analysis of factorability used conventional methods, including item correlations, the Kaiser-Meyer-Olkin measure of sampling adequacy, and Bartlett’s test of sphericity. In determining the final EFA model, we investigated the eigenvalues, scree plots, and cross-loadings, and we retained only items with factor loadings greater than 0.30. In the CFA, the following fit indexes were used: root mean square error of approximation (RMSEA), Tucker-Lewis Index (TLI), comparative fit index (CFI), and standardised root mean square residual (SRMR). EFA was conducted using SPSS v26, while CFA was performed using Mplus software (Muthén & Muthén, 1998-2012).
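The internal-consistency statistic reported above (Cronbach’s alpha = 0.91) follows a standard formula that can be computed directly from raw item scores. A minimal sketch in Python (NumPy only; the simulated Likert responses are purely illustrative assumptions, not the study’s data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative use on simulated six-point Likert responses (hypothetical data):
rng = np.random.default_rng(0)
latent = rng.normal(size=(454, 1))  # a common signal shared by all items
responses = np.clip(np.round(3.5 + latent + 0.8 * rng.normal(size=(454, 10))), 1, 6)
alpha = cronbach_alpha(responses)   # high, since items share the latent signal
```

Items that all tap the same construct drive the total-score variance well above the sum of item variances, pushing alpha toward 1.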
Data screening and EFA results
The factorability of the items was determined after the data set was checked for possible outliers. Sixty-six items were positively correlated (0.34 to 0.91; p < .001), while the rest were negatively correlated. The Kaiser-Meyer-Olkin measure of sampling adequacy (0.88) was above the threshold value of 0.60, and Bartlett’s test of sphericity was significant (χ²(595) = 6372.48; p < .0001). Finally, the communalities ranged from 0.44 to 0.70, all above 0.30. These indexes support the factorability of the 70 items comprising the tool.
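The two factorability checks used above follow conventional formulas: Bartlett’s test compares the item correlation matrix against an identity matrix, and the KMO measure compares zero-order correlations with partial (anti-image) correlations. A minimal NumPy/SciPy sketch (function names and the simulated data are illustrative assumptions, not the study’s code):

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test that the item correlation matrix is an identity matrix.

    Returns (chi-square statistic, degrees of freedom, p-value).
    """
    n, k = data.shape
    R = np.corrcoef(data, rowvar=False)
    statistic = -(n - 1 - (2 * k + 5) / 6.0) * np.log(np.linalg.det(R))
    df = k * (k - 1) // 2
    return statistic, df, chi2.sf(statistic, df)

def kmo_overall(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    R_inv = np.linalg.inv(R)
    # Partial (anti-image) correlations from the inverse correlation matrix.
    scale = np.sqrt(np.outer(np.diag(R_inv), np.diag(R_inv)))
    P = -R_inv / scale
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(P, 0.0)
    r2, p2 = (R ** 2).sum(), (P ** 2).sum()
    return r2 / (r2 + p2)
```

When items share common factors, zero-order correlations are large while partial correlations shrink, so the KMO ratio approaches 1; values above 0.60 are conventionally taken to support factorability.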
Expected Outcomes
Exploratory factor analysis using maximum likelihood estimation and direct oblimin rotation initially extracted ten factors. The cumulative eigenvalues indicated that an 8-factor model accounted for 72.22% of the variance observed in the data set. However, an examination of the items comprising the 8th factor revealed only two items, each with cross-loadings on other factors greater than 0.30; both were subsequently deleted. The final EFA extracted seven factors comprising 38 items, with factor loadings ranging from 0.42 to 0.72. The confirmatory factor analysis (CFA) results show that the fit statistics of the seven-factor model were all satisfactory, i.e., within the conventional cut-off values, indicating a good model fit: the RMSEA was 0.03, below the 0.05 cut-off, and the CFI (0.91) and TLI (0.93) were above 0.90. Further, the unstandardised loadings were all significant, as the absolute values of the ratios of the estimates to their standard errors were all greater than 1.96. The factor loadings of the items on their corresponding factors were all substantial, ranging from 0.74 to 0.94, and the R² values indicated that the variance explained ranged from 54.76% to 88.36%; that is, each item’s associated factor accounted for more than 50% of that item’s variance. These results support the convergent validity of the tool. Further, the correlations amongst the factors of the 7-factor model ranged from 0.21 to 0.40, indicating good discriminant validity.
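The CFA fit indexes reported above (RMSEA, CFI, TLI) are all derived from the model and baseline (independence-model) chi-square statistics. A short sketch of the conventional formulas (the numeric inputs in any usage are illustrative, not this study’s values):

```python
import math

def rmsea(chi2_model, df_model, n):
    """Root mean square error of approximation (values <= .05 suggest close fit)."""
    return math.sqrt(max(chi2_model - df_model, 0.0) / (df_model * (n - 1)))

def cfi(chi2_model, df_model, chi2_null, df_null):
    """Comparative fit index relative to the independence (null) model."""
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, d_model)
    return 1.0 - d_model / d_null if d_null > 0 else 1.0

def tli(chi2_model, df_model, chi2_null, df_null):
    """Tucker-Lewis index (non-normed fit index)."""
    ratio_null = chi2_null / df_null
    ratio_model = chi2_model / df_model
    return (ratio_null - ratio_model) / (ratio_null - 1.0)
```

Mplus reports these indexes directly; the formulas above simply make explicit how each summarises the chi-square discrepancy relative to model degrees of freedom and the baseline model.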
The seven factors that define effective PSTs’ mentors include: 1) actively collaborating with the university, 2) establishing an effective relationship with PSTs, 3) providing direction and support, 4) sharing a progressive mindset, 5) sharing resources and experiences, 6) developing a disposition and professional knowledge in mentoring, and 7) facilitating PSTs’ learning.
References
Banville, D. (2002, July 16-19). Literature review of best practices of cooperating teachers in the USA. Conference on Physical Education, Beijing, China.
Ellis, N. J., Alonzo, D., & Nguyen, H. T. M. (2020). Elements of a quality pre-service teacher mentor: A literature review. Teaching and Teacher Education, 92, 103072. https://doi.org/10.1016/j.tate.2020.103072
Muthén, L. K., & Muthén, B. O. (1998-2012). Mplus user’s guide (7th ed.). Muthén & Muthén.
Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics. Pearson Education Inc.
Worthington, R. L., & Whittaker, T. A. (2006). Scale development research: A content analysis and recommendations for best practices. The Counseling Psychologist, 34, 806-838.