Background: Young people’s economic, cultural and social status (ECSS) is one of the most prevalent constructs used for studying equity of educational outcomes. National, regional and international large-scale assessments have furthered the quantitative research concerning the relationship between economic, cultural, and social background indicators and educational outcomes (Broer et al., 2019; Lietz et al., 2017; OECD, 2018).
However, there are notable theoretical and analytical limitations in the use of existing ECSS indicators from large-scale assessments for the purpose of monitoring equity in education (Osses et al., Forthcoming). Theoretical limitations relate to inconsistencies in how the ECSS construct is defined and operationalised, which pose significant challenges for comparing results between large-scale assessments and limit the usability of findings in addressing policy issues concerning equity in education. For example, Osses et al. (2022) demonstrated that using alternative approaches for constructing an ECSS indicator leads to different judgements about education systems in terms of equity of learning achievement.
Analytical limitations relate to the validity and reliability of ECSS indicators used in large-scale assessments. Whilst studies often explore reliability, cross-national invariance and other psychometric properties of ECSS indicators, information about the relative performance of alternative indicators is rarely provided. In fact, no studies were found that compare the performance of alternative ECSS indicators constructed by large-scale assessments, although Oakes and Rossi (2003) provide an example of such a comparison in health research.
Objective: This paper analyses the properties of two ECSS indicators constructed using alternative theoretical and analytical approaches, applied to the same student sample. Evidence on validity is provided to evaluate the relative merits and the comparability of the two indicators for monitoring equity in education.
Method: This study analyses the properties of students’ ECSS indicators constructed by PISA and TIMSS with the aim of providing evidence concerning the validity and comparability of these two indicators. The novelty of the methodological approach lies in estimating both indicators for the same sample of students – those in PISA 2018 – thus allowing the merits of each analytical approach to be assessed on a common basis.
Indicators are analysed in terms of their content – i.e., evaluating alignment between the theoretical construct, the indicators and the items chosen for their operationalisation – and their internal consistency. Indicators’ internal structure is investigated using confirmatory factor analysis and item response modelling, with attention to model fit and the precision with which the indicators measure the ECSS construct – that is, targeting and reliability. The use of plausible values as a strategy to reduce error in making inferences about the population of interest is also explored.
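The internal-consistency and targeting checks described here can be illustrated with a minimal sketch on hypothetical response data (the data, and the use of Cronbach's alpha as the consistency index and item endorsement rates as a crude targeting check, are illustrative assumptions, not the study's actual procedure):

```python
import numpy as np

# Hypothetical dichotomous responses to ECSS items (rows: students, cols: items).
responses = np.array([
    [0, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [1, 1, 1],
])

def cronbach_alpha(x: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = x.shape[1]
    item_vars = x.var(axis=0)          # population variance of each item
    total_var = x.sum(axis=1).var()    # variance of the summed score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Endorsement rate per item: values near 1.0 suggest the item is "too easy"
# for the population of interest, i.e. poorly targeted.
endorsement = responses.mean(axis=0)

print(f"alpha = {cronbach_alpha(responses):.3f}")  # 0.875 on this toy data
print("endorsement rates:", endorsement)
```

An item-response-theory analysis, as used in the study, would replace the endorsement rates with estimated item locations compared against the latent trait distribution, but the targeting logic is the same.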
Preliminary results show that the TIMSS-like indicator constructed using PISA 2018 data would benefit from a clearer definition of the underlying construct and from theoretical support for evaluating the adequacy of the items chosen in its operationalisation. In terms of internal consistency, results indicate that items in the TIMSS-like indicator are “too easy” for the PISA population of interest and, although response data show a reasonable fit to the measurement model, the chosen items provide an imprecise measurement of students’ ECSS.
Three key conclusions emerge from the preliminary results. First, large-scale assessments should devote more effort to clearly defining, and providing theoretical support for, the construct of students’ ECSS. Second, items used in summary indicators of ECSS should be carefully inspected, not only in terms of their reliability but also in terms of the adequacy of response categories and fit to the measurement model. Third, the use of plausible values should be considered in order to avoid bias and improve the precision of population estimates. The PISA indicator is currently being analysed.
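The plausible-values recommendation amounts to estimating each statistic once per plausible value and pooling the results with Rubin's combining rules; a minimal sketch with made-up numbers (the estimates and sampling variances below are illustrative, not results from the study):

```python
import numpy as np

# Hypothetical per-plausible-value statistics: the same population mean
# estimated once per plausible value, each with its own sampling variance.
estimates = np.array([500.0, 502.0, 498.0, 501.0, 499.0])  # one estimate per PV
sampling_vars = np.array([4.0, 4.0, 4.0, 4.0, 4.0])        # squared standard errors

def pool_plausible_values(q, u):
    """Combine M plausible-value estimates using Rubin's rules."""
    m = len(q)
    q_bar = q.mean()                 # pooled point estimate
    u_bar = u.mean()                 # within-imputation variance
    b = q.var(ddof=1)                # between-imputation variance
    total = u_bar + (1 + 1 / m) * b  # total variance of the pooled estimate
    return q_bar, total

q_bar, total_var = pool_plausible_values(estimates, sampling_vars)
print(q_bar, total_var ** 0.5)  # pooled mean and its standard error
```

The between-imputation term is what captures the measurement uncertainty that a single point estimate of ECSS would ignore, which is the source of the bias and imprecision the conclusion refers to.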