Session Information
04 ONLINE 20 B, Students with autism spectrum disorder: Expanding the conversation
Paper Session
Meeting ID: 822 5444 2679, Code: y3UEev
Contribution
This study builds on findings from a BERA-funded research project which investigated teachers’ use of data to inform emergency remote teaching (ERT) during COVID-19 restrictions (Chase et al., 2021). While teachers used data to inform their planning for remote learning, their use of standardised test data decreased (Chase et al., 2021). Large-scale, standardised assessments are used in many countries, where the data serve a range of purposes, including diagnostics, monitoring curriculum implementation for accountability purposes, and certification (Verger et al., 2019). From a classroom perspective, standardised assessments can be used to inform planning for learning and teaching. However, with any large-scale test, there are groups of students who may perform differently from the expected norms.
For students with autism, participation in large-scale assessments has been linked with the promotion of quality instruction, although there is a lack of clarity about the accuracy with which such tests can measure their academic proficiency (Witmer & Roschmann, 2020). Many students with autism present with a range of diverse needs that can considerably affect their ability to communicate what they know. These diversities in students with autism are well documented (e.g. American Psychiatric Association, 2013; Trammell et al., 2013; Matson, 2016) and raise questions about the extent to which standardised tests are inclusive of students with autism. The extent to which a student with autism can access the concepts and communicate what they know in standardised tests has implications for teachers in relation to analysing, interpreting and using the test data, yet there is a paucity of research in this area.
The Australian Council for Educational Research (ACER) is a leading global educational research centre with expertise in large-scale, standardised test development. ACER’s Progressive Achievement Tests (PAT) are low-stakes, standardised tests developed for Australian schools. The tests are designed around domain-specific scales and mapped to the curriculum, enabling teachers to track the learning progress of individuals, groups and cohorts longitudinally throughout primary and secondary school. Teachers may also use the scale scores and accompanying level descriptions to interpret the learning and teaching needs of their students. ACER offers professional learning courses that support teachers and school leaders to analyse, interpret and use PAT data more effectively for differentiation of learning and teaching.
In 2021, a group of schools specialising in educating students with autism participated in PAT assessments, although the administration of these assessments was impacted by a period of ERT in response to a COVID-19 outbreak. All teachers from these specialist schools engaged in online professional learning to understand and use PAT data to inform better planning for learning and teaching. The teachers observed a range of anomalous data from the students who completed the PAT, such as disparities between students’ achievement on the test and teacher observations in the classroom, and inconsistent patterns of response. This raised questions about how teachers implement and use standardised tests to inform learning and teaching for their students with autism. This study sought to investigate the phenomenon further. The following research question guided the study:
What strengths, limitations, challenges and opportunities are associated with assessing and interpreting the learning progress of students with autism using a standardised testing tool?
Supporting questions were used to focus the scope of the study:
- To what extent do standardised test analysis and interpretation align with the teachers’ observations, beliefs and contextual knowledge of their students?
- To what extent are standardised tests inclusive of students with autism?
- What factors should be considered when analysing and interpreting standardised test data for students with autism?
Method
This qualitative study is a practice-based critical reflection on, and analysis of, the school Principal’s, Course Coordinator’s and Facilitator’s learning from the experience of interpreting standardised test (PAT) data to inform learning and teaching for students with autism. Qualitative data were collected via online communications and audio-recorded critical reflections on practice by the authors. The learnings were analysed thematically (Braun & Clarke, 2006) using a Miles and Huberman-style matrix (Miles & Huberman, 1994). An interpretive approach (Cohen et al., 2007; Keutel & Werner, 2011) was taken to understand the implications of the study and to compare the findings with data collected from the BERA-funded study.
Expected Outcomes
Analysis of the authors’ critical reflections reveals strengths, limitations, opportunities and challenges associated with using standardised tests to inform learning and teaching for students with autism. A strength lies in the rigorous design of the tests to reduce bias and increase the reliability and validity of the results. However, some students with autism struggled to access the test, placing limitations on the usability of their test results. This raises questions about how standardised testing could be made more inclusive for students with autism. Another limitation is uncertainty about the accuracy of assessment data from students who completed the test at home during remote learning. This aligns with qualitative findings from the BERA study, indicating a possible factor in the overall reduction in standardised testing during COVID-19 remote teaching periods.

Challenges arose when teachers noted discrepancies between test results and their observations of students in the classroom, highlighting the critical role of teacher professional judgement and knowledge of students when interpreting test data. There are opportunities to empower teachers to interpret standardised test data in light of their observations of their students and school contexts.

This research has global implications due to the ubiquitous nature of standardised testing across school systems. It raises considerations regarding standardised testing of students with autism, including discussion of how test results can be interpreted and used in light of context. The paper highlights the importance of educators undertaking professional learning to interpret and use standardised test data to inform learning and teaching of students with autism. It provides insight into ways in which teacher beliefs about their students can influence interpretations of assessment data. The next steps in this research involve capturing detailed teacher observations and uses of standardised test data from students with autism as teachers develop skills in data use and interpretation.
References
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77-101.
Chase, A-M., Richardson, K., & Reinertsen, N. (2021). What sources of data did teachers use to inform remote teaching under COVID-19? https://www.bera.ac.uk/publication/what-sources-of-data-did-teachers-use-to-inform-remote-teaching-under-covid-19
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). Routledge.
Keutel, M., & Werner, M. (2011). Interpretive case study research: Experiences and recommendations. Mediterranean Conference on Information Systems.
Matson, J. L. (Ed.). (2016). Comorbid conditions among children with autism spectrum disorder. Springer International Publishing.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Sage Publications.
Trammell, B., Wilczynski, S. M., Dale, B., & McIntosh, D. E. (2013). Assessment and differential diagnosis of comorbid conditions in adolescents and adults with autism spectrum disorders. Psychology in the Schools, 50(9), 936-946. https://doi.org/10.1002/pits.21720
Verger, A., Parcerisa, L., & Fontdevila, C. (2019). The growth and spread of large-scale assessments and test-based accountabilities: A political sociology of global education reforms. Educational Review, 71(1), 5-30. https://www.tandfonline.com/doi/full/10.1080/00131911.2019.1522045
Witmer, S. E., & Roschmann, S. (2020). An examination of measurement comparability for a school accountability test among accommodated and non-accommodated students with autism. Education and Training in Autism and Developmental Disabilities, 55(2), 173-184.