Can We Identify Guessing in Student Response Patterns? – a Preliminary Analysis of a Field Trial Design.
Author(s):
Chris Freeman (presenting / submitting)
Conference:
ECER 2015
Format:
Paper

Session Information

09 SES 12 A, Theoretical and Methodological Issues in Tests and Assessments (Part 2)

Paper Session continues from 09 SES 08 A

Time:
2015-09-11
09:00-10:30
Room:
326. [Main]
Chair:
Jan-Eric Gustafsson

Contribution

This research is grounded in the intersecting theoretical frameworks of assessment validity (Messick, 1989) and the use of Item Response Theory (IRT) to analyse and report student achievement tests. The research questions investigate the use of data from those assessments, in particular cases where guessing is an acknowledged, and in some cases encouraged, student response strategy in multiple choice tests.

In most large scale assessment programs that involve multiple choice items analysed using Item Response Theory, guessing is either unaccounted for (Rasch, 1960) or treated as a property of the item calibration model (Birnbaum, 1968; Hambleton et al., 1985, 1991). It is the contention of this paper that guessing is a function of the test taker, a person-based parameter rather than a global property of an item, and as such should be identified in person response patterns. Central to this case study is a field study that attempts to identify patterns of responses, and characteristics of analyses, that may allow a priori identification of guessing in a student’s result and hence provide a mechanism to validly account for any potential misinformation or statistical errors in reporting student performance.

The Macquarie Dictionary (2002) defines ‘guessing’ as “to form an opinion of at random or from evidence admittedly uncertain”. Educationalists (Frary et al., 1977; Lord, 1964) contend that two types of guessing may be present in student responses to multiple choice items: (1) the educated guess, made using judgement and a level of knowledge, which is more likely to be correct; and (2) the random guess, made without regard for the information provided in the item.

The raising of the stakes (Andrich, 2014) in full cohort testing programs has led to an increase in the amount of guessing, as manifested by the reduction in omit rates in large scale assessments. George Madaus (2002) contends that the higher the stakes involved in testing, the less likely you are to get an accurate measurement of the construct you most want to measure.

Many of the major large-scale assessments cited below use Item Response Theory as the underlying theoretical and conceptual framework to estimate student achievement. For instance, PISA and NAPLAN use the Rasch model (Rasch, 1960), in which the probability of a student correctly responding to the cognitive demands of any particular test question is a function of the difficulty of the question and the ability of the student in relation to the characteristic (or trait) being assessed. In contrast, TIMSS and PIRLS apply variants of the Item Response Theory model that attempt to take account of specific characteristics of the item-student interaction in regard to the discrimination of the items that comprise the test, and in some cases to account for guessing.
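The contrast between these model families can be sketched as item response functions. The snippet below is an illustrative sketch only — the function names and parameter values are mine, not drawn from any of the cited assessment programs. The Rasch model is a one-parameter logistic, while the 3PL model adds a discrimination parameter and a lower asymptote commonly interpreted as a pseudo-guessing parameter attached to the item:

```python
import math

def rasch_prob(theta, b):
    """Rasch model: P(correct) depends only on ability (theta) minus difficulty (b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def pl3_prob(theta, b, a=1.0, c=0.0):
    """3PL model: adds item discrimination (a) and a lower asymptote (c),
    the latter often read as a guessing parameter of the item."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# A very low-ability student (theta = -3) on a hard item (b = 2):
# the Rasch model predicts near-zero success, while a four-option
# 3PL item with c = 0.25 can never predict below 0.25.
print(round(rasch_prob(-3.0, 2.0), 3))               # -> 0.007
print(round(pl3_prob(-3.0, 2.0, a=1.0, c=0.25), 3))  # -> 0.255
```

The comparison makes the paper's point concrete: in the 3PL the guessing floor is attached to the item and applied to every test taker, whereas the contention here is that guessing belongs to the person and should be sought in individual response patterns.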

Given the lack of clarity regarding how and to what extent guessing is accounted for in these various models this research will further investigate the impact of guessing on the estimation of item difficulty and its impact on the consequent estimation of student ability.

In essence, this research will investigate the following questions:

  • How is guessing accounted for in modern analysis approaches?
  • How can guessing be identified in student response patterns?
  • What is the impact of guessing on the calibration of student achievement?

The principal research technique will be fieldwork to inform the identification of guessing in student responses, as distinct from responses that may reflect other item characteristics and item-person interactions such as misconceptions, ambiguity or misunderstanding, or elimination techniques.

Method

The over-arching method is initially to present students with an instrument that requires them to guess or to apply problem solving techniques in responding to items. Curriculum-appropriate items will be presented in the Arabic language to English speaking students. A second stage of the data collection presents the same instrument to the same students in English, so that comparisons and correlations of the student response patterns can be performed to identify random guessing, informed guessing, and the patterns of responses for students of varying mathematical ability at the target Year levels.

Data will be collected from a convenience sample of two groups of students: one from Year 5 and the other from Year 7. The assessments administered to participants will be developed from a collection of items that have been previously calibrated and implemented in both the Arabic and English medium; hence the relative performance of these items is known in each language medium.

Design:

1. Random guessing and limited informed guessing (Trial 1) – the sample engages with a Mathematics test in the Arabic medium. Some items will have minimal context and result in random guessing. In other items students will have clues due to the commonality of the number system, the diagrams, and the relative commonality of the curriculum content at the student year level. In these items students may respond using informed strategies as opposed to random guessing, and these responses may be a function of, and proportional to, mathematical ability. These data will inform the identification of random guessing patterns compared to partial knowledge responses, and how these data are represented and reported in the analysis applications.

2. Ability guessing (Trial 2) – the examination from Trial 1 is presented in the English medium and the response patterns are investigated at cohort, ability group and individual student level. Given the commonality of content and the appropriate targeting of the items to the sample year, students will be able to demonstrate their mathematical ability in this test given its familiar medium, content and structure.

Plan for Analysis: Analysis of these data will be conducted using a Rasch model analysis program and a 3PL analysis program, and the results compared. Student response patterns will also be interrogated to gain insight into how response patterns may assist in identifying guessing, and hence how guessing manifests itself in the analysis programs and graphical output.
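One simple form the planned interrogation of response patterns could take is a person-fit style check: flag items answered correctly despite a very low model-expected probability for that student. The sketch below is a hypothetical illustration under the Rasch model; the threshold value and function names are my assumptions, not part of the study design:

```python
import math

def expected_prob(theta, b):
    """Rasch probability of a correct response for ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def flag_unexpected_corrects(theta, responses, difficulties, threshold=0.15):
    """Flag items answered correctly despite a very low model-expected
    probability -- one crude signal consistent with random guessing."""
    flags = []
    for item, (x, b) in enumerate(zip(responses, difficulties)):
        p = expected_prob(theta, b)
        if x == 1 and p < threshold:
            flags.append((item, round(p, 3)))
    return flags

# A low-ability student (theta = -1.5) who nonetheless answered the two
# hardest items (b = 2.0 and b = 2.5) correctly.
responses    = [1, 1, 0, 1, 1]
difficulties = [-1.0, 0.0, 0.5, 2.0, 2.5]
print(flag_unexpected_corrects(-1.5, responses, difficulties))
# -> [(3, 0.029), (4, 0.018)]
```

In the bilingual design above, the same check could be run on the Arabic-medium responses (where random guessing is expected) and the English-medium responses (where ability should dominate), with the contrast between the two flag rates informing the identification of person-level guessing.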

Expected Outcomes

Conclusions will be drawn on the basis of the relative applicability of different models and the reliability of the data and information produced as valid indicators of student performance on which to base critical educational policy and interventions. The final objective of the research is to produce a report outlining the findings of the focus questions and identify any further areas of research that might be precipitated by the outcomes. The data and findings will be shared with the participants and preliminary conclusions discussed.

References

1. Andrich, D., Marais, I., & Humphry, S. (2011). Using a Theorem by Andersen and the Dichotomous Rasch Model to Assess the Presence of Random Guessing in Multiple Choice Items. Journal of Educational and Behavioural Statistics, 37, 417.
2. Frary, A.B., Cross, L.H., & Lowry, S.R. (1977). Random Guessing, Correction for Guessing and Reliability of Multiple-Choice Test Scores. The Journal of Experimental Education, 46(1), 11-15.
3. Lau, P.N.K., Lau, S.H., Hong, K.S., & Usop, H. (2011). Guessing, Partial Knowledge, and Misconceptions in Multiple-Choice Tests. Educational Technology & Society, 14(4), 99-110.
4. Messick, S. (1989). Meaning and Values in Test Validation: The Science and Ethics of Assessment. Educational Researcher, 18(2), 5-11.
5. Waller, M.I. (1974). Removing the Effects of Random Guessing from Latent Trait Ability Estimates. Educational Testing Service, Princeton, N.J. ETS-RB-74-32.

Author Information

Chris Freeman (presenting / submitting)
Australian Council for Educational Research, Australia
