Session Information
16 ONLINE 25 A, Educational Technology and Teacher Competences
Paper Session
Contribution
The accelerating rise and widespread adoption of digital technology in the private and business sectors has led to an international consensus regarding the necessity of regularly integrating technology into educational settings, both to enhance learning in general and to prepare students for competent use of digital technology (Peña-López, 2015). The ongoing COVID-19 pandemic has accelerated this process even further (Helm, Huber & Loisinger, 2021). However, meta-analyses have revealed that the mere integration of technology by teachers is not sufficient for these purposes (Baker, Goodboy, Bowman & Wright, 2018). Hence, technological knowledge and competence are prerequisites for an implementation that effectively supports learning. This was also addressed by the European Commission when it published the European Framework for the Digital Competence of Educators (DigCompEdu; Redecker, 2017). To date, only one self-report measure is available to evaluate the respective constructs (Caena & Redecker, 2019).
Based on Shulman’s taxonomy of pedagogical knowledge (PK), content knowledge (CK) and pedagogical content knowledge (PCK), Mishra and Koehler introduced a conceptual framework for the integration of technological knowledge into teaching practices. To keep up with the demands of digital learning, they added technological knowledge (TK), technological content knowledge (TCK), technological pedagogical knowledge (TPK) and the overarching technological pedagogical content knowledge (TPACK) to Shulman’s taxonomy. While TK is the foundation of any technological application, TPK is regarded as a central dimension, denoting teachers’ domain-unspecific knowledge about the effective integration of educational technology (Lachner, Backfisch & Stürmer, 2019).
In past research, TPACK has been assessed through either self-reports or test-based assessments. By nature, self-reports are economical and easy to administer, but their validity has been questioned for a variety of reasons, including that they reflect self-efficacy beliefs rather than actual competence (Scherer, Tondeur & Siddiq, 2017). To date, little research on TPACK focuses on the relationship between self-reported and test-based assessments.
Akyuz (2018) compared test-based outcomes to self-ratings and found that CK, TK, PCK and TCK were closely intercorrelated, whereas PK, TPK and TPACK were further apart. To date, research on the magnitude of correlations between self-reports and test outcomes is inconsistent (e.g., Drummond & Sweeney, 2017; König, Kaiser & Felbrich, 2012).
Research on the causal predominance of test-based vs. self-reported measures bears structural similarity to a body of research on the relationship between subjective ratings of academic self-concept (ASC) and objective achievement assessments. In this line of research, three competing models have been discussed (Schneider & Ludwig, 2012):
(1) Skill-Development-Model (SDM): Test-based assessment outcomes predict subjective self-reports
(2) Self-Enhancement-Model (SEM): Subjective self-reports predict test-based assessment outcomes
(3) Reciprocal-Effects-Model (REM): Both aforementioned effects take place simultaneously
In ASC research, a recent meta-analysis provides strong support for the REM (Wu, Guo, Yang, Zhao & Guo, 2021). Cross-lagged modelling reveals that both the SDM path and the SEM path reach statistical significance (β = .08, p < .01 and β = .16, p < .01, respectively). Given this structural similarity, there is reason to expect a similar pattern of effects in the relationship between digitalization-related test-based assessments and self-reports of TK and TPK.
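Schematically, the three models can be distinguished within a single pair of cross-lagged regression equations (a sketch in our own notation, not taken from the cited studies):

```latex
\begin{aligned}
\mathrm{Self}_{t+1} &= \beta_{11}\,\mathrm{Self}_{t} + \beta_{12}\,\mathrm{Test}_{t} + \varepsilon_{1} \\
\mathrm{Test}_{t+1} &= \beta_{21}\,\mathrm{Self}_{t} + \beta_{22}\,\mathrm{Test}_{t} + \varepsilon_{2}
\end{aligned}
```

Here, β11 and β22 are the autoregressive stabilities; a significant β12 (test → self) corresponds to the SDM, a significant β21 (self → test) to the SEM, and both cross-lagged paths being significant to the REM.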
In conclusion, the validity of self-reports and their relationship to test-based assessments deserve further attention so that both kinds of measures can be used adequately to facilitate TPACK. We therefore investigated the following research questions:
(RQ 1) Do test-based and self-reported knowledge in the TK and TPK domains develop over the course of teacher candidates’ bachelor studies?
(RQ 2) Do test-based assessments and self-reports of TK and TPK influence each other longitudinally, providing support for the SDM, SEM or REM?
Method
In a cohort study design, a sample of teacher candidates enrolled in a teacher education program in Rhineland-Palatinate (Germany) is being monitored. To date, preliminary data from N = 148 participants are available, encompassing two surveys six months apart. Data collection is ongoing, and these participants will take part in a third survey; analyses based on all three surveys will be presented at the conference.

To measure self-reported technological knowledge (TK) and technological pedagogical knowledge (TPK), subscales of the TPACK.xs were used (Schmid, Brianza & Petko, 2020). The instrument is based on validated self-report measures (e.g., Schmidt, Baran, Thompson, Mishra, Koehler & Shin, 2009). To measure test-based TK and TPK, instruments from the German M3K project were used (Herzig, Schaper, Martin & Ossenschmidt, 2016). These instruments draw on research from the German ‘media pedagogy’ context (e.g., Blömeke, 2000). Two subfacets of media pedagogy are ‘media didactics’ and ‘technological knowledge’; the former is closely related to TPK, while the latter corresponds to TK (Herzig et al., 2016).

In the preliminary analysis of the currently available data, paired t-tests were conducted to investigate shifts in the dependent variables between the two surveys. To analyse the longitudinal relationship between self-reported and test-based knowledge, the data were submitted to cross-lagged analysis on a manifest level using the lavaan package in R. We hypothesized that (1) self-assessed and test-based TK and TPK rise throughout the course of studies, and (2) self-reported and test-based assessments predict each other.
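For illustration, a minimal manifest cross-lagged specification for the TK domain could look as follows in lavaan; this is a sketch, and the variable names (tk_self_t1, tk_test_t1, etc.) are hypothetical placeholders rather than the project’s actual scripts:

```r
library(lavaan)

# Manifest cross-lagged panel model for the TK domain (sketch).
# tk_self_* = self-reported TK, tk_test_* = test-based TK at surveys 1 and 2.
clpm_tk <- '
  # autoregressive (stability) paths
  tk_self_t2 ~ a1 * tk_self_t1
  tk_test_t2 ~ a2 * tk_test_t1

  # cross-lagged paths: c1 tests the SEM mechanism (self -> test),
  # c2 tests the SDM mechanism (test -> self)
  tk_test_t2 ~ c1 * tk_self_t1
  tk_self_t2 ~ c2 * tk_test_t1

  # within-wave covariances between self-report and test score
  tk_self_t1 ~~ tk_test_t1
  tk_self_t2 ~~ tk_test_t2
'

fit <- sem(clpm_tk, data = tk_data, missing = "fiml")
summary(fit, standardized = TRUE, fit.measures = TRUE)
```

The third survey extends this structure to three waves by adding the analogous t2 → t3 paths, and the same specification applies to the TPK domain.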
Expected Outcomes
Contrary to expectations, neither TK nor TPK showed significant changes over time in the preliminary analyses. Although this might partly be explained by the relatively short interval between the two surveys, it points to an urgent need to optimize curricula so that teacher education coursework facilitates the acquisition of TK and TPK. The aforementioned third survey will be subjected to more sophisticated analyses to provide more clarity on the research questions. As for the longitudinal relationship between self-reports and test outcomes, the data supported the SEM mechanism in the TK domain but not in the TPK domain. In TK, self-reports predicted the test-based assessment (β = .48, p = .038) but not vice versa. This might imply that teacher educators should structure their programs in line with the SEM for the teaching and learning of TK, for example by emphasizing positive feedback or providing successful role models. The TPK domain showed no predictive relationships between self-reported and test-based assessments; therefore, none of the abovementioned models of causal predominance seems to apply to TPK. However, this could be due to the lack of development indicated by the paired t-tests, or to the use of manifest rather than latent cross-lagged modelling. Once the additional survey and latent modelling are taken into account, these findings may change.
References
Akyuz, D. (2018). Measuring technological pedagogical content knowledge (TPACK) through performance assessment. Computers & Education, 125, 212–225.
Baker, J. P., Goodboy, A. K., Bowman, N. D., & Wright, A. A. (2018). Does teaching with PowerPoint increase students’ learning? A meta-analysis. Computers & Education, 126, 376–387.
Blömeke, S. (2000). Medienpädagogische Kompetenz. München: kopaed.
Caena, F., & Redecker, C. (2019). Aligning teacher competence frameworks to 21st century challenges: The case for the European Digital Competence Framework for Educators (DigCompEdu). European Journal of Education, 54(3), 356–369.
Drummond, A., & Sweeney, T. (2017). Can an objective measure of technological pedagogical content knowledge (TPACK) supplement existing TPACK measures? British Journal of Educational Technology, 48(4), 928–939.
Helm, C., Huber, S., & Loisinger, T. (2021). Meta-Review on findings about teaching and learning in distance education during the Corona pandemic—evidence from Germany, Austria and Switzerland. Zeitschrift für Erziehungswissenschaft, 24(2), 237–311.
Herzig, B., Schaper, N., Martin, A., & Ossenschmidt, D. (2016). Schlussbericht zum BMBF-Verbundprojekt M3K. Paderborn, Germany: Universität Paderborn.
Lachner, A., Backfisch, I., & Stürmer, K. (2019). A test-based approach of modeling and measuring technological pedagogical knowledge. Computers & Education, 142.
Peña-López, I. (2015). Students, computers and learning: Making the connection. Paris, France: OECD Publishing.
Redecker, C. (2017). European framework for the digital competence of educators: DigCompEdu (No. JRC107466). Seville, Spain: Joint Research Centre. Retrieved from https://ec.europa.eu/jrc/en/digcompedu
Scherer, R., Tondeur, J., & Siddiq, F. (2017). On the quest for validity: Testing the factor structure and measurement invariance of the technology-dimensions in the Technological, Pedagogical, and Content Knowledge (TPACK) model. Computers & Education, 112, 1–17.
Schmid, M., Brianza, E., & Petko, D. (2020). Developing a short assessment instrument for Technological Pedagogical Content Knowledge (TPACK.xs) and comparing the factor structure of an integrative and a transformative model. Computers & Education, 157, 103967.
Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technological pedagogical content knowledge (TPACK). Journal of Research on Technology in Education, 42(4), 123–149.
Schneider, C., & Ludwig, P. H. (2012). Auswirkungen von Maßnahmen der inneren Leistungsdifferenzierung auf Schulleistung und Fähigkeitsselbstkonzept im Vergleich zur äußeren Differenzierung. In T. Bohl, M. Bönsch, M. Trautmann & B. Wischer (Eds.), Binnendifferenzierung. Teil 1: Didaktische Grundlagen und Forschungsergebnisse zur Binnendifferenzierung im Unterricht (pp. 72–106). Immenhausen bei Kassel: Prolog-Verlag.