Session Information
16 SES 07 B, Teachers' Knowledge and Beliefs / TPACK
Paper/Pecha Kucha Session
Contribution
Nowadays it is not sufficient for a teacher to be only a good subject specialist or a good pedagogue. Effective use of digital technology is also highlighted in several policy papers as an important aspect of a teacher’s knowledge for the 21st century (Groff, 2013; Estonian Lifelong Learning Strategy 2020, 2014). Besides educating pupils, experienced teachers have to supervise pre-service teachers and/or mentor beginning teachers. This means that they must have broader knowledge across the different fields of teachers’ work than pre-service teachers do. However, many in-service teachers were educated at a time when technological skills and the integration of technology into the learning environment were not emphasized or taught. Therefore, it is important to compare in-service and pre-service teachers’ knowledge in various fields, including technology and its integration.
Because the digital competences of both teachers and students are highlighted in Estonia (Estonian Lifelong Learning..., 2014) and TPACK is widely accepted and applied in several studies (e.g. Cengiz, 2015; Dong, Chai, Sang, Koh & Tsai, 2015), our study draws on the TPACK framework. The framework was developed by Mishra and Koehler (2006) and consists of seven parts. Three parts describe the basic areas of teachers’ knowledge (Content, Technological, and Pedagogical Knowledge) and four overlapping parts indicate the integrations between these three areas (Mishra & Koehler, 2006):
Technological Content Knowledge (TCK) – knowledge of subject matter integrated with technology;
Technological Pedagogical Knowledge (TPK) – knowledge of using technology to support teaching methods;
Pedagogical Content Knowledge (PCK) – knowledge of teaching methods in different subject contexts;
Technological Pedagogical Content Knowledge (TPACK) – knowledge of using technology to implement teaching methods in different subject contexts.
Previous studies have found that both pre-service teachers (e.g. Cengiz, 2015; Lin, Tsai, Chai, & Lee, 2013) and in-service teachers (e.g. Koh, Chai, Hong, & Tsai, 2015; Lin et al., 2013) rate all their perceptions significantly higher than neutral. However, studies differ regarding which areas receive the highest and the lowest ratings. Öz (2015) found that Turkish pre-service teachers gave the highest mean scores to PK and TK and the lowest to TCK and PCK, whereas Dong et al. (2015) found that Chinese pre-service teachers perceived themselves as strongest in TPK and weakest in CK. In-service teachers have been found to rate PK (Dong et al., 2015; Koh et al., 2015) and CK (Koh et al., 2015) the highest, and TPACK the lowest (Dong et al., 2015; Koh et al., 2015). In a study conducted in Singapore, pre-service teachers rated TK the highest, whereas in-service teachers rated CK and PCK the highest (Koh & Chai, 2014).
So far, little comparative research has been done on teachers and teacher education students (Dong et al., 2015), and the few existing comparative studies have yielded somewhat contradictory results. No significant differences were found between Turkish teachers’ and students’ evaluations in any area of TPACK, including technology (Saltan & Arslan, 2017), whereas in China teacher education students provided significantly lower ratings than teachers in all seven areas of the TPACK framework (Dong et al., 2015).
Therefore, the aim of this study was to describe and compare pre-service and in-service teachers’ evaluations of their content, pedagogical, and technological knowledge according to the TPACK framework. Two research questions were posed: (1) What are pre-service and in-service teachers’ evaluations of their knowledge areas? (2) What are the differences between pre-service and in-service teachers’ estimates of their knowledge in the different areas?
Method
Data were gathered from two sub-samples. The first consisted of pre-service teachers from the University of Tartu who took the course ‘Designing Learning and Instruction’ in fall 2014 or 2015. The researchers emphasized that participation was voluntary and that the research was not related to course assessments. This sub-sample comprised 206 respondents from different curricula, in both the first and second year of teacher education programmes. The second sub-sample consisted of in-service teachers who had worked at a school for at least one year; 256 in-service teachers from different schools in Estonia answered the questionnaire. They came from different subject domains and taught at different school levels. The total sample size was 462 respondents.

Data were collected with a questionnaire based on the TPACK framework (Mishra & Koehler, 2006) and previous studies (Graham et al., 2009; Schmidt et al., 2009; Shih & Chuang, 2013). The questionnaire was developed by Estonian and Finnish researchers and consisted of 51 items. A 5-point Likert-type scale was used: (1) Strongly disagree; (2) Disagree; (3) Neither agree nor disagree; (4) Agree; and (5) Strongly agree. For more information about the development of the questionnaire, see Luik, Taimalu, and Suviste (2017). The TPACK scale corresponding to the Estonian context was divided into seven factors based on the theoretical model. The questionnaire was piloted with 23 pre-service and 78 in-service teachers; in the pilot study, respondents commented on how they understood each item, and items that were misunderstood were revised. The questionnaire ended with questions on background information.

Statistical analyses were carried out using SPSS for Windows, version 24.0. Mean factor scores were calculated for each participant as the means of the items belonging to a particular construct according to the theoretical framework. Mauchly's test was used to check the assumption of sphericity, and linear mixed models were used. Multivariate analysis of variance with the Bonferroni adjustment for multiple comparisons was used to identify differences between the constructs within the pre-service group and within the in-service group (the first research question). Multivariate tests of between-subjects effects with the Bonferroni adjustment, to counteract the problem of multiple comparisons, were used to identify differences in the constructs between pre-service and in-service teachers (the second research question).
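To illustrate the analysis steps described above, the sketch below shows how mean factor scores and Bonferroni-adjusted pairwise comparisons within one group could be computed. It is a minimal illustration in Python rather than the authors’ SPSS procedure; the file name, item columns, and item-to-construct mapping are hypothetical, and paired t-tests stand in for the post hoc tests.

```python
# Illustrative sketch (not the authors' SPSS syntax): mean factor scores per
# construct and Bonferroni-corrected pairwise comparisons within one group.
from itertools import combinations

import pandas as pd
from scipy import stats

# Hypothetical wide-format data: one row per respondent, one column per item.
df = pd.read_csv("tpack_responses.csv")

# Items grouped by construct according to the theoretical TPACK model
# (the item-to-construct mapping shown here is purely illustrative).
constructs = {
    "TK":    ["tk1", "tk2", "tk3"],
    "PK":    ["pk1", "pk2", "pk3"],
    "CK":    ["ck1", "ck2", "ck3"],
    "PCK":   ["pck1", "pck2"],
    "TCK":   ["tck1", "tck2"],
    "TPK":   ["tpk1", "tpk2"],
    "TPACK": ["tpack1", "tpack2"],
}

# Mean factor score per respondent = mean of the items in that construct.
scores = pd.DataFrame(
    {name: df[items].mean(axis=1) for name, items in constructs.items()}
)

# Pairwise comparisons of constructs with a Bonferroni adjustment
# (paired t-tests stand in for the post hoc tests run in SPSS).
pairs = list(combinations(constructs, 2))
alpha_adj = 0.05 / len(pairs)  # Bonferroni-corrected alpha
for a, b in pairs:
    t, p = stats.ttest_rel(scores[a], scores[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}, "
          f"significant at adjusted alpha: {p < alpha_adj}")
```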
Expected Outcomes
The first research question was: What are pre-service and in-service teachers’ evaluations of their knowledge areas? The ranking order of the constructs differed between the two sub-samples. In the case of pre-service teachers, a repeated measures ANOVA showed statistically significant differences between the theoretical constructs (p < .001). Post hoc tests using the Bonferroni correction revealed that construct TCK differed slightly from construct TK, but not statistically significantly (p = .093); however, TCK differed statistically significantly from the other five constructs (in all cases p < .05). Construct PK received significantly lower evaluations than all other constructs (in all cases p < .001). In the case of in-service teachers, statistically significant differences between the theoretical constructs were also found (p < .001), and the ranking order differed from that of pre-service teachers. Post hoc tests with the Bonferroni correction indicated that construct PK, which was in the last position among pre-service teachers, was evaluated statistically significantly higher than all other constructs (in all cases p < .01). Construct TK was evaluated statistically significantly lower than all other constructs (p < .01) except TPK, for which the difference was not statistically significant (p = 1.000).

Regarding the second research question, what are the differences between pre-service and in-service teachers’ estimates of their knowledge in the different areas, statistically significant differences were found for five of the seven constructs. In-service teachers evaluated constructs PK, CK, and PCK significantly higher than pre-service teachers did (in all cases p < .001), whereas pre-service teachers evaluated constructs TPK and TPACK higher than in-service teachers (in both cases p < .05). Surprisingly, there was no statistically significant difference between pre-service and in-service teachers’ evaluations of TK and TCK (in both cases p > .05).
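For the second research question, the between-group comparison could be sketched as below: one test per construct with a Bonferroni adjustment. This is again an illustrative Python example rather than the SPSS multivariate tests of between-subjects effects actually used, and the file name, group labels, and column names are hypothetical.

```python
# Illustrative sketch of the between-group comparison (second research
# question): an independent-samples test per construct, Bonferroni-adjusted.
import pandas as pd
from scipy import stats

# Hypothetical file with one mean factor score per construct and a group label.
scores = pd.read_csv("tpack_factor_scores.csv")
constructs = ["TK", "PK", "CK", "PCK", "TCK", "TPK", "TPACK"]
alpha_adj = 0.05 / len(constructs)  # Bonferroni-corrected alpha

pre = scores[scores["group"] == "pre-service"]
ins = scores[scores["group"] == "in-service"]
for c in constructs:
    t, p = stats.ttest_ind(pre[c], ins[c], equal_var=False)
    print(f"{c}: t = {t:.2f}, p = {p:.4f}, significant: {p < alpha_adj}")
```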
References
Cengiz, C. (2015). The development of TPACK, Technology Integrated Self-Efficacy and Instructional Technology Outcome Expectations of pre-service physical education teachers. Asia-Pacific Journal of Teacher Education, 43(5), 411–422.
Dong, Y., Chai, C. S., Sang, G.-Y., Koh, H. L., & Tsai, C.-C. (2015). Exploring the Profiles and Interplays of Pre-service and In-service Teachers’ Technological Pedagogical Content Knowledge (TPACK) in China. Educational Technology & Society, 18(1), 158–169.
Estonian Lifelong Learning Strategy 2020 (2014). Available: https://www.hm.ee/sites/default/files/estonian_lifelong_strategy.pdf [Accessed 12 January 2018].
Graham, C. R., Borup, J., & Smith, N. B. (2012). Using TPACK as a framework to understand teacher candidates’ technology integration decisions. Journal of Computer Assisted Learning, 28, 530–546.
Groff, J. (2013). Technology-rich innovative learning environments. OECD CERI Innovative Learning Environments Project. Available: http://www.oecd.org/edu/ceri/Technology-Rich%20Innovative%20Learning%20Environments%20by%20Jennifer%20Groff.pdf [Accessed 12 January 2019].
Koh, J. H. L., & Chai, C. S. (2014). Teacher clusters and their perceptions of technological pedagogical content knowledge (TPACK) development through ICT lesson design. Computers & Education, 70, 222–232.
Koh, J. H. L., Chai, C. S., Hong, H.-Y., & Tsai, C. C. (2015). A survey to examine teachers’ perceptions of design dispositions, lesson design practices, and their relationships with technological pedagogical content knowledge (TPACK). Asia-Pacific Journal of Teacher Education, 43(5), 378–391. doi: 10.1080/1359866X.2014.941280
Lin, T.-C., Tsai, C.-C., Chai, C. S., & Lee, M.-H. (2013). Identifying science teachers' perceptions of technological, pedagogical, and content knowledge (TPACK). Journal of Science Education and Technology, 22, 325–336.
Mishra, P., & Koehler, M. J. (2006). Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge. Teachers College Record, 108(6), 1017–1054.
Öz, H. (2015). Assessing Pre-service English as a Foreign Language Teachers’ Technological Pedagogical Content Knowledge. International Education Studies, 8(5), 119–130.
Saltan, F., & Arslan, K. (2017). A comparison of in-service and pre-service teachers’ technological pedagogical content knowledge self-confidence. Cogent Education, 4. doi: 10.1080/2331186X.2017.1311501
Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technological Pedagogical Content Knowledge (TPACK): The development and validation of an assessment instrument for preservice teachers. Journal of Research on Technology in Education, 42(2), 123–149.
Shih, C.-L., & Chuang, H.-H. (2013). The development and validation of an instrument for assessing college students’ perceptions of faculty knowledge in technology-supported class environments. Computers & Education, 63, 109–118.