The impact of student assessment according to TERCE
Author(s):
Laura Cañadas (presenting / submitting), Cynthia Martinez-Garrido (presenting)
Conference:
ECER 2016
Format:
Paper

Session Information

09 SES 03 C, A Spotlight on Latin America

Paper Session

Time:
2016-08-23
17:15-18:45
Room:
NM-F107
Chair:
Pablo Fraser

Contribution

Assessment is an essential component of the educational process. Traditionally, classroom assessment has had a summative purpose. The goal of summative assessment is to evaluate students’ learning at the end of an instructional unit or semester and to compare it against a standard, establishing what has been learned and how well it was learned. It provides information that sums up the learning process. The most common outcome of summative assessment is the grade, which indicates a student’s level of knowledge, and the most widely used instrument is the exam.

However, assessment may also serve a formative function. The goal of formative assessment is to continuously gather evidence about the learning and teaching process (Heritage, 2007). This evidence is used (i) to give students feedback directly related to the task and to adjust the learning activities in which students are involved, and (ii) to adapt teaching to students’ needs, helping them develop skills to learn better and advance in their learning. Formative assessment also promotes students’ active involvement in their peers’ and their own assessment. Formative assessment thus contributes to students’ learning, influencing both the learning process and academic achievement (Weurlander, Söderberg, Scheja, Hult & Wernerson, 2012).

Many researchers around the world study student assessment, and their results and conclusions are intended to be extrapolated to the population. However, the belief that findings obtained in some places have universal validity is a fallacy: research results are only valid for the context in which they were obtained or to which they refer. Unfortunately, the knowledge base on student assessment is very limited in developing countries.

In Latin America, educational assessment is principally focused on external evaluation that certifies the educational level of each country against standards or rankings (Ferrer, 2006). However, this kind of assessment does not seem to be the most suitable for these countries. The region is characterized, among other features, by its educational disparity: these inequalities stem from gaps between the social strata in which educational systems operate (Rivero, 2000). Moreover, education itself and the components of the educational process can perpetuate or even intensify these inequities, producing differences in students’ learning (Blanco Bosco, 2009). Therefore, evaluation based on national measures and international comparisons ends up legitimizing differences (Murillo & Román, 2010).

Among previous research conducted in Latin America on how assessment influences students’ academic achievement, the most remarkable study is the one developed in 2015 by Martínez-Garrido in eight Latin American countries (Bolivia, Chile, Colombia, Cuba, Ecuador, Panama, Peru and Venezuela) and Spain, with 5,722 participants. Results showed that collecting students’ notebooks had an impact of 5 points on Language performance and 7.8 points on Mathematics performance. The study also found that feedback provided to students meant an improvement of 2 points in Language performance.

The aim of this research is to:

  • Determine the impact that assessment has on Primary students’ academic achievement in Reading and Mathematics

Method

To achieve our aim, we use data from the Third Regional Comparative and Explanatory Study (TERCE, UNESCO). We conducted a multilevel model with three levels: student, school and country. We used the value-added approach, discounting students’ sociodemographic factors in order to determine more reliably the factors that influence learning. The study used a stratified, two-stage cluster sampling design. Information was obtained on 105,847 students in 5,733 schools from 15 Latin American countries (Argentina, Brazil, Chile, Colombia, Costa Rica, Dominican Republic, Ecuador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, and Uruguay).

Variables:

1. Product variables:
  • Mathematics performance
  • Reading performance
Both were obtained through IRT scaling, with a mean of 500 and a standard deviation of 50.

2. Adjustment variables:
  • Cultural level of students’ families, obtained from the average of the parents’ highest educational level
  • Socio-economic level of students’ families, obtained from parents’ occupations and family possessions
  • School socio-economic level, obtained from the average socio-economic level of the families in each school
  • Student’s gender
  • Student’s origin (native or immigrant)
  • Native language (Spanish or another)
  • Years of pre-schooling

3. Explanatory variables:
  • Assessment function: formative (assessing students’ learning progress) or summative (merely assigning a grade to the student)
  • Use of continuous assessment to assess students’ performance
  • Use of exams to assess students’ learning
  • Use of individual work
  • Adaptation of the assessment to students: using the same test for all students, or using different tasks according to students’ knowledge
  • Communication of assessment results to families

These variables were collected with: 1) performance tests in Mathematics and Reading; 2) a students’ questionnaire; 3) a families’ questionnaire; and 4) a teachers’ questionnaire with information about the assessment used in their classes.

To determine the impact of assessment on students’ academic achievement, four-level multilevel models were used. These models are commonly used in this kind of study (e.g. Goldstein, 2011; Murillo & Martínez-Garrido, 2013), and their use is justified by data nested at different levels (country, school, classroom and student).
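The nesting that motivates the multilevel models can be sketched with simulated data. The following minimal, stdlib-only illustration (the school and student variance figures are hypothetical, not TERCE estimates) shows how scores cluster within schools and how the between-school share of the variance, the intraclass correlation that a multilevel model accounts for, can be estimated:

```python
import random
import statistics

random.seed(42)

# Simulate TERCE-like nested data: students within schools.
# Scores are scaled with mean 500, SD 50, as in TERCE.
N_SCHOOLS = 50
N_STUDENTS = 40          # students per school
SCHOOL_SD = 25           # between-school variation (hypothetical)
STUDENT_SD = 43          # within-school variation (hypothetical)

data = []
for school in range(N_SCHOOLS):
    school_effect = random.gauss(0, SCHOOL_SD)
    for _ in range(N_STUDENTS):
        score = 500 + school_effect + random.gauss(0, STUDENT_SD)
        data.append((school, score))

# Group scores by school.
by_school = {}
for school, score in data:
    by_school.setdefault(school, []).append(score)

# Decompose variance into within-school and between-school parts
# (method-of-moments, one-way ANOVA style).
within = statistics.fmean(
    statistics.pvariance(scores) for scores in by_school.values()
)
between = statistics.pvariance(
    [statistics.fmean(scores) for scores in by_school.values()]
) - within / N_STUDENTS   # correct for sampling noise in school means

icc = between / (between + within)
print(f"Intraclass correlation (variance share between schools): {icc:.2f}")
```

A non-trivial intraclass correlation is exactly the situation in which single-level regression understates uncertainty and a nested (multilevel) specification is warranted.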

Expected Outcomes

The multilevel modeling process needed to determine the impact of assessment on students’ academic achievement provides interesting results about the influence of the adjustment variables on each outcome studied. The final model gives an overview of the contribution that each teacher assessment variable makes to each competency studied.

The results of the adjusted model show that:

  • The socio-economic and cultural index (ISEC) has a strong influence on students’ performance in both areas studied, especially in Reading: for every standard deviation of ISEC above the average, students gain 26 points in Reading.
  • Gender affects performance on both product variables; that is, boys and girls obtain different results, with girls scoring 4 points higher in Reading.
  • There are differences depending on students’ origin: foreign children obtain lower results than native students in both areas. Our data suggest that immigrant students score 1.6 points lower in Reading and 1 point lower in Mathematics than native students in the same classroom.

The final model shows that:

  • The frequency of assessing individual work improves students’ performance in Reading (performance rises 3 points for every one-point increase in the frequency of assessing individual work).
  • Continuous assessment of students improves performance in both Reading and Mathematics (2 points).
  • Carrying out final exams hinders student performance: students score 3 points lower in Mathematics and Reading for each one-point increase in the frequency of exams.
  • Giving feedback to students affects their performance; specifically, it improves performance in Mathematics (1 point for each one-point increase in the frequency with which the teacher gives feedback).
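In a linear multilevel model these coefficient estimates combine additively on the TERCE scale. A small sketch (the function, the student profile, and the use of the 500-point scale mean as baseline are our own illustration, not part of the study) shows how a predicted Reading score is assembled from the effects reported above:

```python
# Illustrative only: the coefficients below are the Reading effects
# reported in the abstract; 500 is the TERCE scale mean used here as
# a hypothetical baseline, and the student profiles are invented.

COEF_ISEC = 26         # points per standard deviation of ISEC
COEF_GIRL = 4          # girls score 4 points higher in Reading
COEF_IMMIGRANT = -1.6  # immigrant students score lower in Reading
COEF_INDIV_WORK = 3    # per one-point increase in frequency of assessing individual work
COEF_CONTINUOUS = 2    # continuous assessment
COEF_EXAMS = -3        # per one-point increase in exam frequency

def predicted_reading(isec_sd=0.0, girl=False, immigrant=False,
                      indiv_work_freq=0, continuous=False, exam_freq=0):
    """Additive linear prediction on the TERCE Reading scale (mean 500, SD 50)."""
    score = 500.0
    score += COEF_ISEC * isec_sd
    if girl:
        score += COEF_GIRL
    if immigrant:
        score += COEF_IMMIGRANT
    score += COEF_INDIV_WORK * indiv_work_freq
    if continuous:
        score += COEF_CONTINUOUS
    score += COEF_EXAMS * exam_freq
    return score

# A girl one ISEC standard deviation above average, in a classroom where
# individual work is assessed one point more frequently:
print(predicted_reading(isec_sd=1.0, girl=True, indiv_work_freq=1))  # 533.0
```

The sketch makes the practical reading of the results concrete: the ISEC effect (26 points per standard deviation) dwarfs each individual assessment-practice effect (1 to 3 points), which is why the value-added adjustment for sociodemographic factors matters before attributing differences to teaching.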

References

Blanco Bosco, E. (2009). La desigualdad de resultados educativos: aportes a la teoría desde la investigación sobre eficacia escolar. Revista Mexicana de Investigación Educativa, 14(43), 1019-1049.

Ferrer, G. (2006). Sistemas de Evaluación de Aprendizajes en América Latina. Balance y Desafíos. PREAL.

Goldstein, H. (1987). Multilevel models in educational and social research. London: Charles Griffin & Co.

Heritage, M. (2007). Formative assessment: What do teachers need to know and do? Phi Delta Kappan, 89(2), 140-145.

Martínez-Garrido, C. (2015). La investigación sobre enseñanza eficaz. Un estudio multinivel para Latinoamérica. Madrid: UAM Ediciones.

Murillo, F. J., & Martínez-Garrido, C. (2013). Impact of homework on academic performance. A study of Iberoamerican students of Primary Education / Incidencia de las tareas para casa en el rendimiento académico. Un estudio con estudiantes iberoamericanos de Educación Primaria. Revista de Psicodidáctica/Journal of Psychodidactics, 18(1), 157-171.

Murillo, F. J., & Román, M. (2010). Retos en la evaluación de la calidad de la educación en América Latina. Revista Iberoamericana de Educación, 53, 97-120.

Rivero, J. (2000). Reforma y desigualdad en América Latina. Revista Iberoamericana de Educación, 23.

Weurlander, M., Söderberg, M., Scheja, M., Hult, H., & Wernerson, A. (2012). Exploring formative assessment as a tool for learning: Students’ experiences of different methods of formative assessment. Assessment and Evaluation in Higher Education, 37(6), 747-760.

Author Information

Laura Cañadas (presenting / submitting)
Autonomous University of Madrid
Madrid
Zaragoza University, Spain
