Session Information
02 SES 16 B, Identifying Digital Competencies in Healthcare: A Methodological Challenge in Quantitative and Qualitative Research
Symposium
Contribution
In healthcare surveys addressing digital competencies, self-assessment questionnaires are often used because they promise efficient administration and evaluation (Allen & van der Velden, 2005). It is essential for researchers to use validated, reliable, and well-tested assessment instruments. At the same time, survey duration plays a critical role in healthcare, as time resources in medical settings are limited (Konrad et al., 2010). Many meticulously developed instruments for measuring digital competencies, such as the “Scale for the Assessment of Non-Experts’ AI Literacy” (SNAIL; Laupichler et al., 2023), consist of a large number of items, which often makes such scales unattractive for researchers studying samples of healthcare professionals.

Based on the SNAIL scale, this study aims to demonstrate that a reduced number of items can often suffice to reliably measure a given construct. We re-analyzed two large SNAIL datasets (Laupichler et al., 2023; Laupichler et al., 2024) to determine whether a smaller number of items still yields acceptable internal consistency, expressed as Cronbach’s α. The two datasets were analyzed separately, comparing the α values of the original subscales (factors), comprising 31 items in total, with the α values obtained after the number of items was reduced. For this reduction, the four items with the highest factor loadings from the initial exploratory factor analysis (Laupichler et al., 2023) were selected per subscale. In the original scale, subscale 1 had α values of .92 (Dataset 1) and .94 (Dataset 2), subscale 2 had values of .92 and .89, and subscale 3 had values of .86 and .82, respectively. According to Streiner (2003), these values indicate very good internal consistency. However, values exceeding .9 suggest that subscale items might be redundant (Streiner, 2003; Hulin et al., 2001).
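The item-selection step described above (keeping the four items with the highest factor loadings per subscale) can be sketched as follows. This is an illustrative sketch, not the authors’ analysis code, and the loadings shown are made up for the example rather than taken from the published SNAIL factor analysis:

```python
import numpy as np

# Hypothetical EFA loadings for one 10-item subscale (illustrative values only;
# the actual SNAIL loadings are reported in Laupichler et al., 2023).
loadings = np.array([0.71, 0.48, 0.83, 0.62, 0.77, 0.55, 0.69, 0.80, 0.51, 0.66])

# Indices of the four items with the highest loadings on their factor,
# i.e. the items retained for the shortened subscale.
top4 = np.argsort(loadings)[::-1][:4]
print(sorted(top4.tolist()))  # item indices kept for the short form
```

The same selection would be applied to each of the three subscales, yielding 12 items in total.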
When only the four items with the highest factor loadings were considered, α for subscale 1 decreased to .85 and .84, for subscale 2 to .84 and .79, and for subscale 3 to .79 and .76. These values can still be interpreted as good to very good (Streiner, 2003). The findings suggest that using 12 instead of 31 items may suffice to reliably measure the construct. Since Cronbach’s α has limitations (Agbo, 2010), decisions regarding the reduction of assessment instruments should also consider additional metrics and substantive considerations. Nevertheless, it is worth reflecting on whether existing high-quality assessment instruments for digital competencies in healthcare can be made even more efficient.
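The α comparison underlying these results can be illustrated with a minimal Python sketch using the standard variance-based formula for Cronbach’s α. The data here are simulated (a single latent trait plus item noise), not the SNAIL datasets, so the exact values will differ from those reported above:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Simulated data (not the SNAIL datasets): one latent trait plus item-level
# noise, roughly mimicking an 8-item subscale.
rng = np.random.default_rng(42)
n_respondents = 500
latent = rng.normal(size=(n_respondents, 1))
full_scale = latent + rng.normal(scale=1.0, size=(n_respondents, 8))

# Short form: keep only the first four columns, standing in for the four
# items with the highest factor loadings.
short_form = full_scale[:, :4]

alpha_full = cronbach_alpha(full_scale)
alpha_short = cronbach_alpha(short_form)
print(f"full: {alpha_full:.2f}, short: {alpha_short:.2f}")
```

As in the study, the shortened scale typically shows a somewhat lower but still acceptable α, reflecting the well-known dependence of α on the number of items.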
References
Agbo, A. A. (2010). Cronbach's α: Review of limitations and associated recommendations. Journal of Psychology in Africa, 20(2), 233–239.

Allen, J. P., & van der Velden, R. K. W. (2005). The role of self-assessment in measuring skills. REFLEX Working Paper Series No. 2. ROA.

Hulin, C., Netemeyer, R., & Cudeck, R. (2001). Can a reliability coefficient be too high? Journal of Consumer Psychology, 10(1-2), 55–58.

Konrad, T. R., Link, C. L., Shackelton, R. J., Marceau, L. D., von dem Knesebeck, O., Siegrist, J., ... & McKinlay, J. B. (2010). It's about time: Physicians' perceptions of time constraints in primary care medical practice in three national healthcare systems. Medical Care, 48(2), 95–100.

Laupichler, M. C., Aster, A., Haverkamp, N., & Raupach, T. (2023). Development of the “Scale for the assessment of non-experts’ AI literacy”: An exploratory factor analysis. Computers in Human Behavior Reports, 12, 100338.

Laupichler, M. C., Aster, A., Meyerheim, M., Raupach, T., & Mergen, M. (2024). Medical students’ AI literacy and attitudes towards AI: A cross-sectional two-center study using pre-validated assessment instruments. BMC Medical Education, 24(1), 401.

Streiner, D. L. (2003). Starting at the beginning: An introduction to coefficient alpha and internal consistency. Journal of Personality Assessment, 80(1), 99–103.