Session Information
Session 2B, Developing/Adapting Assessment Instruments and Systems
Papers
Time:
2005-09-07
17:00-18:30
Room:
Agric. LG20
Chair:
Sandra Johnson
Contribution
OBJECTIVES OF THE STUDY
The objectives of this study were to: (1) develop an instrument to assess secondary school students' perceptions of their assessment tasks, and (2) validate the instrument's structure.

BACKGROUND TO STUDY
Despite the growth in emancipatory conceptualisations of classrooms that embrace a constructivist epistemology, there is little contemporary evidence that students are genuinely involved in assessment decision-making. The forms of assessment and the specific assessment tasks employed in schools are overwhelmingly decided by administrators and teachers. Furthermore, even though reports such as The Status and Quality of Teaching and Learning in Australia (Goodrum, Hackling, & Rennie, 2001) have asserted that assessment is a key component of the teaching and learning process, teachers tend to use a very narrow range of assessment strategies. In practice, there is little evidence that teachers actually use diagnostic or formative assessment strategies to inform planning and teaching (Radnor, 1996). In England, external accountability via a national curriculum is an entrenched part of the contemporary schooling landscape, and a similar trend is developing in Australia, where benchmarking, testing and reporting to authorities now assume great importance in schools. The reality for students is one of almost complete exclusion from the assessment process: administrators and teachers have a role, but students have none. This paper addresses this issue by reporting the development of an instrument to assess students' perceptions of assessment tasks.

METHOD
Student perceptual data on assessment tasks were collected from a sample of 658 Year 9 and 10 students (312 male, 346 female) in 11 secondary schools in Essex, England. An intuitive-rational approach to scale development was employed (Dorman, 2002; Hase & Goldberg, 1967).
This approach has three main steps: (1) identification of salient dimensions, (2) writing and review of tentative items, and (3) field testing and refinement. The result of this process was a 40-item instrument called the Perception of Assessment Tasks Inventory (PATI). Scale reliability analysis and confirmatory factor analysis (CFA) were used to validate the PATI's structure. Model fit, model comparison and model parsimony indices are reported in the present paper (Byrne, 1998; Jöreskog & Sörbom, 1993).

DEVELOPMENT OF PATI
Based on an extensive review of contemporary school assessment literature, five tentative dimensions were identified: Congruence with Planned Learning, Authenticity, Student Consultation, Transparency, and Accommodation of Student Diversity. For each dimension, a set of 12 items was written. These items were submitted to a panel of measurement experts who provided advice on face validity and scale allocation. A tentative form of the PATI consisting of five 8-item scales was developed and field tested.

VALIDATION OF PATI
Confirmatory factor analysis substantiated the PATI's five-scale structure. Fit indices indicated sound model fit, model comparison and model parsimony. All scales had sound internal consistency reliability, with Cronbach alpha coefficients ranging from .63 for Accommodation of Student Diversity to .85 for Transparency. The discriminant validity of each scale ranged from .39 for Accommodation of Student Diversity to .51 for Transparency. The scales therefore assess theoretically distinct but empirically overlapping assessment dimensions.

CONCLUSION
This paper reports original research on the assessment of student perceptions of assessment tasks. Despite the obvious reality that students are central to assessment in schools, little European research has been conducted on how students perceive assessment tasks, so the development of an instrument for this purpose is timely.
Further research with the PATI in Europe is needed if students are to be active participants rather than passive recipients in the assessment process.

REFERENCES
Byrne, B. M. (1998). Structural equation modeling with LISREL, PRELIS, and SIMPLIS: Basic concepts, applications and programming. Mahwah, NJ: Erlbaum.
Dorman, J. P. (2002). Classroom environment research: Progress and possibilities. Queensland Journal of Educational Research, 18, 112-140.
Goodrum, D., Hackling, M., & Rennie, L. (2001). The status and quality of teaching and learning in Australian schools. Canberra: Department of Education, Training and Youth Affairs.
Hase, H. D., & Goldberg, L. G. (1967). Comparative validity of different strategies of constructing personality inventory scales. Psychological Bulletin, 67, 231-248.
Jöreskog, K. G., & Sörbom, D. (1993). LISREL 8: User's reference guide. Chicago, IL: Scientific Software International.
Radnor, H. (1996). Evaluation of key stage 3 assessment in 1995 and 1996 (research report). Exeter: University of Exeter.
Walberg, H. J. (1976). Psychology of learning environments: Behavioral, structural, or perceptual? Review of Research in Education, 4, 142-178.
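NOTE ON RELIABILITY COMPUTATION
The internal consistency coefficients reported for the PATI scales (Cronbach alphas of .63 to .85) can be computed for any scale from its respondents-by-items score matrix. The sketch below is a generic illustration of that computation, not the study's analysis code, and the score matrix is entirely hypothetical (it does not reproduce the PATI data).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores),
    using sample variances (ddof=1).
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents answering one 8-item Likert scale
scores = np.array([
    [4, 5, 4, 4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4, 5, 5, 4],
    [3, 3, 3, 2, 3, 3, 3, 3],
    [1, 2, 1, 1, 2, 1, 2, 1],
    [4, 4, 5, 4, 4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 2))
```

In a study like this one, the routine would be applied once per scale (each student's eight item scores for that scale forming a row), giving one alpha per PATI dimension.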