Session Information
09 SES 12 A, Theoretical and Methodological Issues in Tests and Assessments (Part 2)
Paper Session continues from 09 SES 08 A
Contribution
This research is grounded in the intersecting theoretical frameworks of assessment validity (Messick, 1989) and the use of Item Response Theory (IRT) to analyse and report student achievement tests. The research questions investigate the use of data from those assessments, in particular cases where guessing is an acknowledged, and in some cases encouraged, student response strategy in multiple choice tests.
In most large scale assessment programs that involve multiple choice items analysed using Item Response Theory, guessing is either unaccounted for (Rasch, 1960) or treated as a property of the item calibration model (Birnbaum, 1968; Hambleton et al., 1985, 1991). It is the contention of this paper that guessing is a function of the test taker, a person-based parameter rather than a global property of an item, and as such should be identified in person response patterns. Central to this case study is a field study that attempts to identify patterns of responses, and characteristics of analyses, that may permit a priori identification of guessing in a student's result and hence provide a mechanism to validly account for any potential misinformation or statistical errors in reporting student performance.
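The distinction can be made explicit in standard IRT notation. Under the Rasch model, the probability that student j answers item i correctly depends only on ability \theta_j and difficulty b_i, whereas the three-parameter logistic (3PL) model of Birnbaum (1968) adds an item-level lower asymptote c_i intended to capture guessing:

P(X_{ij} = 1) = \frac{\exp(\theta_j - b_i)}{1 + \exp(\theta_j - b_i)} \quad \text{(Rasch)}

P(X_{ij} = 1) = c_i + (1 - c_i)\,\frac{\exp(a_i(\theta_j - b_i))}{1 + \exp(a_i(\theta_j - b_i))} \quad \text{(3PL)}

Because c_i is indexed by item rather than by person, the 3PL model assigns every test taker the same guessing propensity on a given item; this is precisely the assumption the present paper questions.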
The Macquarie Dictionary (2002) defines 'guessing' as "to form an opinion of at random or from evidence admittedly uncertain". Educationalists (Frary et al., 1977; Lord, 1964) contend that two types of guessing may be present in student responses to multiple choice items: (1) the educated guess, made using judgement and a level of knowledge, which is more likely to be correct; and (2) the random guess, made without regard for the information provided in the item.
The raising of the stakes (Andrich, 2014) in full-cohort testing programs has led to an increase in the amount of guessing, as manifested by the reduction in omit rates in large scale assessments. George Madaus (2002) contends that the higher the stakes involved in testing, the less likely you are to get an accurate measurement of the construct you most want to measure.
Many of the major large-scale assessments use Item Response Theory as the underlying theoretical and conceptual framework for estimating student achievement. For instance, PISA and NAPLAN use the Rasch model (Rasch, 1960), which requires that the probability of a student correctly responding to the cognitive demands of any particular test question be a function of the difficulty of the question and the ability of the student in relation to the characteristic (or trait) being assessed. In contrast, TIMSS and PIRLS apply variants of the Item Response Theory model that attempt to take account of specific characteristics of the item-student interaction, namely the discrimination of the items that comprise the test and, in some cases, guessing.
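The practical difference between these model families for a weak student facing a hard item can be illustrated with a minimal Python sketch (the parameter values here are invented for illustration and are not drawn from any of the programs named above):

import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def three_pl_p(theta, b, a=1.0, c=0.2):
    """Probability under the 3PL model: item-level guessing sets a floor of c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# A low-ability student (theta = -2) attempting a hard item (b = +2):
theta, b = -2.0, 2.0
print(f"Rasch: {rasch_p(theta, b):.3f}")   # ~0.018: success essentially ruled out
print(f"3PL:   {three_pl_p(theta, b):.3f}")  # ~0.214: the guessing floor dominates

Under the 3PL model the student's success on the hard item is attributed largely to the item's guessing parameter, whereas the Rasch model treats the same response as highly surprising, which is what makes it potentially detectable in the person's response pattern.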
Given the lack of clarity regarding how, and to what extent, guessing is accounted for in these various models, this research will further investigate the impact of guessing on the estimation of item difficulty and, consequently, on the estimation of student ability.
In essence, this research will attempt to investigate the following questions:
- How is guessing accounted for in modern analysis approaches?
- How can guessing be identified in student response patterns?
- What is the impact of guessing on the calibration of student achievement?
The principal research technique will be fieldwork to inform the identification of guessing in student responses, as distinct from responses that may reflect other item characteristics and item-person interactions such as misconceptions, ambiguity or misunderstanding, or elimination techniques.
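As one illustration of what such an a priori screen might look like (a hypothetical sketch using standard Rasch fit residuals and invented data, not the procedure the fieldwork will settle on), unexpected correct responses on items far above a student's estimated ability can be flagged:

import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def flag_possible_guesses(theta, responses, z_cut=2.0):
    """Return indices of correct answers the Rasch model finds highly unexpected.

    responses: list of (item_difficulty, score) pairs with score in {0, 1}.
    A correct response whose standardised residual exceeds z_cut is flagged
    as a candidate random guess rather than evidence of ability.
    """
    flagged = []
    for idx, (b, x) in enumerate(responses):
        p = rasch_p(theta, b)
        z = (x - p) / math.sqrt(p * (1.0 - p))  # standardised Rasch residual
        if x == 1 and z > z_cut:
            flagged.append(idx)
    return flagged

# Invented example: a student of ability -1 answers two far-too-hard items correctly.
responses = [(-2.0, 1), (-1.0, 1), (0.0, 0), (2.5, 1), (3.0, 1)]
print(flag_possible_guesses(theta=-1.0, responses=responses))  # -> [3, 4]

A residual screen of this kind cannot by itself separate random guessing from misconceptions or elimination strategies, which is why the fieldwork component is needed to validate any flags against observed response behaviour.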
Method
Expected Outcomes
References
1. Andrich, D., Marais, I., & Humphry, S. (2011). Using a Theorem by Andersen and the Dichotomous Rasch Model to Assess the Presence of Random Guessing in Multiple Choice Items. Journal of Educational and Behavioral Statistics, 37, 417.
2. Frary, R. B., Cross, L. H., & Lowry, S. R. (1977). Random Guessing, Correction for Guessing and Reliability of Multiple-Choice Test Scores. The Journal of Experimental Education, 46(1), 11-15.
3. Lau, P. N. K., Lau, S. H., Hong, K. S., & Usop, H. (2011). Guessing, Partial Knowledge, and Misconceptions in Multiple-Choice Tests. Educational Technology & Society, 14(4), 99-110.
4. Messick, S. (1989). Meaning and Values in Test Validation: The Science and Ethics of Assessment. Educational Researcher, 18(2), 5-11.
5. Waller, M. I. (1974). Removing the Effects of Random Guessing from Latent Trait Ability Estimates (ETS RB-74-32). Princeton, NJ: Educational Testing Service.