Session Information
Paper Session
Contribution
Training complex competences has become one of the major challenges in education (UNESCO, 2016). Although practicum stages are crucial for developing these competences, in higher education there is a gap between the university teaching context and professional practice. To bridge this gap, courses are usually divided into theory and practice, but the content of the practical components does not always closely resemble the actual exercise of a profession (Grossman et al., 2009); therefore, some complex competences may not be easily trained.
To face this situation, over the last decades several disciplines (medicine, teacher training, engineering, among others) have implemented training through simulation (Chernikova, Heitzmann, Stadler, et al., 2020). Simulation can be defined as a simplified situation in which an interactive episode of real professional practice is mimicked (Cook, Hamstra, et al., 2013); in other words, we represent a situation similar to those that our students will face in their future work.
Alongside representing the situation, the other crucial component of simulation is a structured, reflective dialogue after the representation, called debriefing (Decker et al., 2013). This debriefing stage can be organized in different ways, but it should always let students talk about the experience and the learning goals of the simulation in a supportive climate (Fanning & Gaba, 2007).
Simulation has several advantages for training complex competences. First, students can face a challenging problem where their actions and decisions change the course of events (Heitzmann et al., 2019). Second, it allows a progressive and guided approach to professional practice, permitting the decomposition of the core components of a professional situation (Grossman et al., 2009). Third, the close presence of the teacher allows the use of several feedback and scaffolding strategies (Chernikova, Heitzmann, Fink, et al., 2020). Fourth, simulation creates safe practice environments where mistakes are not irreversible (Cook, Brydges, et al., 2013). Finally, through simulation we can train the different components of professional practice: attitudes, conceptual knowledge, and procedural skills.
There is substantial empirical evidence supporting the use of simulation to train complex competences (Chernikova, Heitzmann, Fink, et al., 2020; Chernikova, Heitzmann, Stadler, et al., 2020; Cook, Hamstra, et al., 2013). It has a positive impact on learning and on other variables important for professional practice, such as self-efficacy, i.e., confidence in one's own skills to perform tasks (Gundel et al., 2019).
Considering the potential of simulation methodologies, we used simulation to train communication and counselling skills with families. We designed two connected studies, in education (study 1) and medicine (study 2), with two objectives. First, we test the impact of simulation through role-playing to train communication and counselling skills with families in educational and health contexts. To do so, we compare the effects of simulation conditions versus a lecture condition on three variables: conceptual knowledge about the topic, self-efficacy to communicate with families, and attitudes towards families. Second, we test three variants of the closing stage of the debriefing, asking students for 'take-home messages' in written format, versus sharing them out loud, versus a combination of sharing them out loud plus a written reflection.
In both studies (education and medicine) we used a mixed-methods quasi-experimental design with pre-post measurements and quasi-control groups, collecting both quantitative and qualitative data. In this communication we will present only the quantitative data. While the data for study 1 (education) have already been collected, we will collect the data for study 2 (medicine) during February and March.
Method
- Participants
In study 1 (education), a total of 84 students answered the pre-post questionnaires (64 female, 20 male). Of the whole sample, 35 belonged to the control group, 15 were placed in the written condition, 12 in the sharing-out-loud condition, and 22 in the combination condition. In study 2 (medicine) we expect to reach around 100 participants across intervention and control groups.
- Instruments
Conceptual knowledge test: for each study we elaborated a 10-question multiple-choice test with three answer options per question. In both studies the theoretical contents concerned delivering bad news to families: in education, regarding a bullying case; in medicine, regarding a medical diagnosis.
Self-efficacy scale: this 10-point Likert scale measures how confident the participant felt about managing future communication situations with families. In study 1 it was adapted from the questionnaire used by De Coninck et al. (2020); in study 2, from Doyle et al. (2011).
Attitudes towards families scale: this 10-point Likert scale assesses how important participants consider effective communication with families to be. In study 1 we created an ad hoc scale.
Attitudes towards health communication: this scale asked about attitudes towards communication with patients in a health context and was adapted from Escribano et al. (2021).
- Procedure
Participants were selected from courses taught by research team members. All students had to participate in the simulation as part of the practical session. However, only those who agreed to participate in the study answered the questionnaires before and after the simulation session and were included in the debriefing recordings. Each group was randomly assigned to a different intervention condition. The simulations took place in two-hour sessions. During these sessions, the instructor explained some theoretical contents. Later, in the briefing stage, the instructor explained some basic rules and the objectives of the session.
A participant from each group volunteered to play the professional. Then, the student had to sit and wait for a family (played by actors) to knock on the classroom door. Once the family entered the classroom, the simulation started and lasted around 10-20 minutes. After that, students were asked to sit in a circle and the debriefing started. In the intervention conditions, the debriefing structure differed only in the final stage (sharing versus writing versus combination). In the control condition, students only had an interactive lecture about the contents of the unit.
Expected Outcomes
Regarding study 1, participants in all intervention and control conditions improved their scores on the three variables (conceptual knowledge, self-efficacy, and attitudes towards families). The control group had higher average scores on the three variables in the post-measurements, which does not support the hypothesis that the intervention groups would obtain higher scores. However, in most pre-measurement comparisons this control group also scored higher than the intervention groups (non-significantly), except on the conceptual knowledge test, where intervention group 3 had a significantly higher score. It is also important to note that on the self-efficacy and attitude scales the mean scores were already high at pre-measurement (between 8 and 8.5 in self-efficacy; 8.2 to 8.9 in attitudes). The different intervention conditions showed similar improvements between the pre- and post-measurements.

Considering these results, it seems that a single-session simulation was effective in the short term on the three measured variables, but its effects did not outperform the interactive lecture. This might be attributed to a lack of differential effect of the simulation, but it can also be explained by some limitations. First, the instruments may not discriminate the differences properly, especially the self-efficacy and attitude scales, which already reached high scores at pre-measurement (ceiling effect). Second, the impossibility of creating full experimental groups by assigning participants randomly to each condition, and the fact that around 40 students were not included in the results (either because they did not agree to participate or because they did not answer the questionnaires at one of the two time points), make it difficult to rule out some alternative explanations. We expect that the study in medicine, with some changes in the design, will help us overcome these limitations and obtain a clearer picture of the effects of the simulation.
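The abstract reports pre-post improvements per condition but does not specify the statistical analysis used. As an illustration only, the basic shape of a gain-score comparison across the four conditions could be sketched as follows; all scores below are synthetic placeholders, not the study's data:

```python
from statistics import mean

# Hypothetical pre/post self-efficacy scores (10-point scale) per condition.
# These values are invented solely to illustrate the computation.
scores = {
    "control":  {"pre": [8.5, 8.7, 8.4], "post": [9.0, 9.1, 8.8]},
    "written":  {"pre": [8.1, 8.3, 8.0], "post": [8.6, 8.8, 8.5]},
    "out_loud": {"pre": [8.2, 8.0, 8.4], "post": [8.7, 8.6, 8.9]},
    "combined": {"pre": [8.3, 8.2, 8.1], "post": [8.9, 8.7, 8.6]},
}

def mean_gain(group):
    """Average post-minus-pre gain across participants in one condition."""
    return mean(post - pre for pre, post in zip(group["pre"], group["post"]))

gains = {cond: round(mean_gain(g), 2) for cond, g in scores.items()}
print(gains)
```

Comparing such per-condition mean gains would only be a descriptive first step; inferential tests (e.g., a mixed ANOVA on the pre-post by condition design) would still be needed, and the ceiling effect noted above would constrain observable gains on the high-scoring scales.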
References
Chernikova, O., Heitzmann, N., Fink, M. C., Timothy, V., Seidel, T., & Fischer, F. (2020). Facilitating Diagnostic Competences in Higher Education—a Meta-Analysis in Medical and Teacher Education. Educational Psychology Review, 32(1), 157–196. https://doi.org/10.1007/S10648-019-09492-2/TABLES/9
Chernikova, O., Heitzmann, N., Stadler, M., Holzberger, D., Seidel, T., & Fischer, F. (2020). Simulation-Based Learning in Higher Education: A Meta-Analysis. Review of Educational Research, 90(4), 499–541. https://doi.org/10.3102/0034654320933544
Cook, D. A., Brydges, R., Zendejas, B., Hamstra, S. J., & Hatala, R. (2013). Technology-enhanced simulation to assess health professionals: A systematic review of validity evidence, research methods, and reporting quality. Academic Medicine, 88(6), 872–883. https://doi.org/10.1097/ACM.0B013E31828FFDCF
Cook, D. A., Hamstra, S. J., Brydges, R., Zendejas, B., Szostek, J. H., Wang, A. T., Erwin, P. J., & Hatala, R. (2013). Comparative effectiveness of instructional design features in simulation-based education: Systematic review and meta-analysis. Medical Teacher, 35(1), e867–e898. https://doi.org/10.3109/0142159X.2012.714886
De Coninck, K., Walker, J., Dotger, B., & Vanderlinde, R. (2020). Measuring student teachers’ self-efficacy beliefs about family-teacher communication: Scale construction and validation. Studies in Educational Evaluation, 64, 100820. https://doi.org/10.1016/J.STUEDUC.2019.100820
Decker, S., et al. (2013). Standards of Best Practice: Simulation Standard VI: The Debriefing Process. Clinical Simulation in Nursing, 9(6), S26–S29. https://doi.org/10.1016/J.ECNS.2013.04.008
Doyle, D., Copeland, H. L., Bush, D., Stein, L., & Thompson, S. (2011). A course for nurses to handle difficult communication situations. A randomized controlled trial of impact on self-efficacy and performance. Patient Education and Counseling, 82(1), 100–109. https://doi.org/10.1016/j.pec.2010.02.013
Escribano, S., Juliá-Sanchis, R., García-Sanjuán, S., Congost-Maestre, N., & Cabañero-Martínez, M. J. (2021). Psychometric properties of the Attitudes towards Medical Communication Scale in nursing students.
Fanning, R. M., & Gaba, D. M. (2007). The role of debriefing in simulation-based learning. Simulation in Healthcare, 2(2), 115–125. https://doi.org/10.1097/SIH.0B013E3180315539
Grossman, P., Compton, C., Igra, D., Ronfeldt, M., Shahan, E., & Williamson, P. W. (2009). Teaching Practice: A Cross-Professional Perspective. Teachers College Record, 111(9), 2025–2100. https://doi.org/10.1177/016146810911100905
Gundel, E., Piro, J. S., Straub, C., & Smith, K. (2019). Self-Efficacy in Mixed Reality Simulations: Implications for Preservice Teacher Education. The Teacher Educator, 54(3), 244–269. https://doi.org/10.1080/08878730.2019.1591560
Heitzmann, N., Seidel, T., Opitz, A., Hetmanek, A., Wecker, C., Fischer, M., Ufer, S., Schmidmaier, R., Neuhaus, B., Siebeck, M., Stürmer, K., Obersteiner, A., Reiss, K., Girwidz, R., & Fischer, F. (2019). Facilitating Diagnostic Competences in Simulations in Higher Education. Frontline Learning Research, 7(4), 1–24. https://doi.org/10.14786/FLR.V7I4.384
UNESCO. (2016). Marco conceptual para la evaluación de competencias.