Contribution
Validity is understood as the degree to which a measurement reflects a trait, aspect, or characteristic of that which is being measured (Bisquerra, 1989). Validity cannot be presented in the abstract, but only with regard to the specific context in which it is applied. Furthermore, taking precise and accurate measurements does not guarantee that those measurements are correct; in other words, it does not guarantee that the phenomenon the author set out to measure has indeed been measured (Del Rincón, Arnal, Latorre & Sans, 1995).
For Robles Garrote and Rojas (2015), validity can refer to content or to construct, and the two are related. Content validity holds when the elements chosen are indicators of the phenomenon the author has set out to measure; construct validity holds when the measurements resulting from content validity can be considered relevant to that phenomenon, which requires the construct to be defined beforehand.
There are various techniques for evaluating the validity of a model, a measurement tool or a study. One of the most frequently used is to elicit the opinion of experts to assess said validity (Gil-Gómez de Liaño & Pascual Ezama, 2012).
The use of expert panels allows authors to elicit the opinions of people whose training or professional career shows that they are capable of giving evidence and critical assessment on the subject matter at hand (Escobar-Pérez & Cuervo-Martínez, 2008), seeking rational consensus (Cooke & Goossens, 2008) and bestowing validity on the content studied (Del Rincón, Arnal, Latorre & Sans, 1995; Jiménez, Salazar & Morera, 2013; Robles Garrote & Rojas, 2015).
De Arquer (1996) distinguishes between four methods for eliciting expert opinion:
- Individual aggregate method: the individual opinion of each expert is elicited separately. It is an economical method, since it does not require participants to meet. Although the lack of exchange between experts might seem a disadvantage, it is in fact one of the method's strengths, since it inhibits bias, for example that derived from peer pressure exerted by other experts.
- Delphi method: responses are initially given individually and anonymously; scores are then aggregated and returned to the experts, so that in each iteration they can react to the previous results, being questioned again about their own responses and those of the rest of the group.
- Nominal group method: the panel members are first questioned individually and anonymously, and then share their scores and considerations with the rest of the group. A structured debate then follows, concluding with a final individual score.
- Group consensus method: an exchange of opinions is sought. If consensus is not reached naturally, individual opinions can be ascertained and then subsequently summarised.
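The anonymous scoring and feedback cycle behind the first two methods can be sketched in code. The abstract does not say how scores are aggregated, so the following is only an illustrative sketch: it assumes 1-5 Likert ratings and uses the group median and interquartile range, the statistics typically fed back to experts between Delphi rounds, together with a purely illustrative consensus rule (IQR of one point or less).

```python
from statistics import median, quantiles

def delphi_round_summary(ratings):
    """Summarise one anonymous rating round for a single item.

    ratings: integer scores (assumed 1-5 Likert) given independently
    by each expert. Returns the group median and interquartile range,
    which would be fed back to the panel before the next round.
    """
    q1, _, q3 = quantiles(ratings, n=4)
    return {
        "median": median(ratings),
        "iqr": q3 - q1,
        # Illustrative consensus rule: IQR of 1 point or less.
        "consensus": (q3 - q1) <= 1,
    }

# Hypothetical scores from five experts on one item:
print(delphi_round_summary([4, 5, 4, 4, 3]))
```

In the individual aggregate method this summary would simply be reported; in the Delphi method it would be returned to the experts, who rescore the item until the consensus criterion is met or a fixed number of iterations is reached.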
Based on these premises, the aim of this research is to use an expert panel to validate the content of a competency profile for the social educator, designed on the basis of a comparative document analysis of the 38 curricula in place at the Spanish universities that teach this degree. The panel of experts is therefore asked to assess the relevance and clarity of each proposed competency with regard to the dimension in which it is included.
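Relevance and clarity ratings of this kind are often condensed into a per-item content-validity coefficient. The abstract does not state which statistic will be used, so the following is only a sketch of one common choice, Aiken's V, assuming a hypothetical 1-4 ordinal rating scale.

```python
def aikens_v(ratings, categories=4):
    """Aiken's V content-validity coefficient for one item.

    ratings: expert scores on an ordinal scale 1..categories (here,
    an assumed 1-4 relevance or clarity scale). V ranges from 0 to 1;
    values near 1 indicate strong agreement that the item is valid.
    """
    n = len(ratings)
    s = sum(r - 1 for r in ratings)  # distance from the lowest category
    return s / (n * (categories - 1))

# Hypothetical relevance ratings from six experts for one competency:
print(round(aikens_v([4, 4, 3, 4, 4, 3]), 2))  # → 0.89
```

Computing V (or a similar index) separately for relevance and for clarity would let the authors retain, revise or discard each competency within its dimension.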
Method
Expected Outcomes
References
Bisquerra, R. (1989). Métodos de investigación educativa. Guía práctica. Barcelona: CEAC.
Cabero Almenara, J. & Llorente Cejudo, M. C. (2013). La aplicación del juicio de experto como técnica de evaluación de las tecnologías de la información (TIC). Revista de Tecnología de Información y Comunicación en Educación, 7(2), 11-22.
Cooke, R. M. & Goossens, L. L. H. J. (2008). TU Delft expert judgment data base. Reliability Engineering and System Safety, 93(5), 657-674.
De Arquer, M. I. (1996). Fiabilidad humana: métodos de cuantificación, juicio de expertos. Madrid: Ministerio de Trabajo y Asuntos Sociales. Retrieved from http://www.insht.es/InshtWeb/Contenidos/Documentacion/FichasTecnicas/NTP/Ficheros/401a500/ntp_401.pdf
Del Rincón, D., Arnal, J., Latorre, A. & Sans, A. (1995). Técnicas de investigación en ciencias sociales. Madrid: Dykinson.
Escobar-Pérez, J. & Cuervo-Martínez, A. (2008). Validez de contenido y juicio de expertos: una aproximación a su utilización. Avances en Medición, 6, 27-36.
García, I. & Fernández, S. (2008). Procedimiento de aplicación del trabajo creativo en grupo de expertos. Energética, XXIX(2), 46-50.
Gil-Gómez de Liaño, B. & Pascual Ezama, D. (2012). La metodología Delphi como técnica de estudio de la validez de contenido. Anales de psicología, 28(3), 1011-1020.
Robles Garrote, P. & Rojas, M. D. C. (2015). La validación por juicio de expertos: dos investigaciones cualitativas en Lingüística aplicada. Revista Nebrija de Lingüística Aplicada, 18, 1-16.
Rodríguez Sedano, A. (2006). Hacia una fundamentación epistemológica de la pedagogía social. Educación y Educadores, 9(2), 131-147.
Sáez Carreras, J. & Campillo Díaz, M. (2013). La Pedagogía Social como comunidad disciplinar: entre la profesionalización y desprofesionalización del campo. Educación Siglo XXI, 31(2), 73-96.
Skjong, R. & Wentworth, B. (2000). Expert Judgement and risk perception. Retrieved from http://research.dnv.com/skj/Papers/SkjWen.pdf