09 SES 14 B JS, Educational Research Infrastructure
Joint Paper Session NW 09 and NW 12
An issue at the center of strong epistemological, didactical and, sometimes, ideological debates is how to integrate the results, methods, theoretical frameworks and tools of standardized assessments – which are designed to have an impact at the systemic level – into the local actions of teachers and schools (Looney, 2011). We describe a project (Gestinv, www.gestinv.it) intended to provide large-scale tools and models of action for addressing this issue. The project is centered on an open web database containing 1,646 items administered in the Italian national standardized tests, together with results, comments, in-depth didactical analyses, metadata and statistical analyses.
As Morris (2011) points out on the basis of an analysis of the empirical literature, a number of lessons can be drawn that help in defining a framework. In our particular context (Italy), a national system of standardized assessment for mathematics (INVALSI) has been in place since 2008. Its implementation encountered serious problems of acceptance among teachers (Signorini, 2015). Our hypothesis is that standardized tests, if not regarded only as a way of providing rankings or scores related to benchmarks, may provide a huge amount of information about mathematics learning and feedback on teaching processes. This information is contained not only in global scores but also in specific phenomena observed through the single items. These phenomena can be brought to light through analyses based on the (qualitative and quantitative) methods and theoretical constructs of the didactics of mathematics. Examples of this use of standardized assessment results for feeding reflection on teaching-learning processes include Ferretti and Gambini (2018), on the problem of implementing “vertical” curricula and the persistence of difficulties in students' behavior, and Giberti et al. (2016), for an example of networking different theories to explain a phenomenon; for a very classical example one may quote the famous “buses problem” (Carpenter et al., 1983; Schoenfeld, 1985).
Our research is based on the idea of integrating the theoretical and operational results, methods and tools of standardized assessments – which operate at the system level – into the formative assessment that each teacher performs in their own class. Since its origins (Scriven, 1967), the main function attributed to formative assessment has been that of a tool for regulating teaching and learning. With reference to the current international scientific debate on this issue, we can say that formative assessment is characterized specifically as assessment for learning (Weeden, Winter, & Broadfoot, 2002; Allal & Laveault, 2009). This means that it has to be an assessment that supports and promotes learning; it is embedded in the teaching-learning process in a dynamic way, modifying teaching actions to follow the needs of the students.
In detail, standardized assessments can first of all provide teachers with reference tools and parameters for their formative assessment. As Harlen (2000) suggests, formative assessment involves discovering what students have and have not achieved, as well as their strengths and weaknesses with respect to different content. Through the analysis of students' skills and competences, it is possible to create educational paths and to define appropriate teaching and assessment methods in accordance with the real needs of the learners (Gipps, 1994). Formative assessment requires bases that are as objective and shared as possible, in order to understand what is really being evaluated and what information will actually be returned; in this direction, standardized assessments can provide significant help.
Our research aims to investigate whether and how the use of the GESTINV database affects teachers' knowledge, skills and other meta-cognitive aspects related to the use of standardized tests in a formative way.
Internationally, several studies show how, by integrating quantitative and qualitative analyses, new research methodologies can be identified that aim at using standardized tests and their results in a formative way. Starting from the most classical definitions of competence (Pellerey, 1983; Le Boterf, 1990), we constructed indicators relating to acquired knowledge of the use of formative assessment based on standardized assessment, to practical skills, and to metacognitive aspects. To investigate this we administered an online survey to all GESTINV database subscribers, addressing only in-service and pre-service teachers. The survey begins with Likert-scale questions investigating knowledge of formative assessment. With regard to knowledge about the use of standardized assessment, we decided to analyze teachers' learning perceptions by asking open questions about how much the use of the database has, or has not, increased their knowledge. Regarding skills, we chose to monitor teachers' perception of their own skills, also asking them to report examples of classroom practices in which they use standardized tests in a formative way. Finally, with regard to meta-competence indicators, we chose to analyze a macro-variable related to the sense of self-efficacy, as a variable hypothetically related to the use of formative assessment in the classroom (Norwegian Teacher Self-Efficacy Scale, NTSES; Skaalvik & Skaalvik, 2009), asking teachers, for each indicator, whether or not it increased after using the database for classroom practice design.
Each item contained in the GESTINV database is indexed with precise references to the objectives and competence targets stated in the National Curricula, and it is accompanied by detailed results and statistical data, as well as classifications in several categories. The database also contains guides, tools for interpreting the results, the frameworks of the tests, and scientific articles with comments and in-depth analyses. The staff organizes periodic webinars for training in the use of the database or for presenting new sets of data (for instance, every year after the day of the assessment). The database is intensively used in professional development programs developed by the schools. Since 2015 the database has been used in teacher training programs implemented in about 200 schools, involving more than 5,500 teachers. The impact of this project is assessed quantitatively and qualitatively. For the quantitative evaluation, standard indicators are used, such as the number of registered users (as of January 2018, more than 10,000), the number of accesses (on average, 200 per day), and the time spent on the site. This research aims to detect the impact of the use of the database on teachers' skills concerning the use of standardized tests and their results within formative assessment practices. We conjectured that use of the Gestinv database can help them increase these competences. A first elaboration of the data emerging from the questionnaires confirms our hypotheses. The complete analysis of the dataset that will come out of the research will allow us to better delineate the issue in terms of knowledge, skills and sense of self-efficacy.
Allal, L., & Laveault, D. (2009). Assessment for learning: Évaluation-soutien d'apprentissage. Mesure et Évaluation en Éducation, 32(2), 99-104.
Carpenter, T. P., Lindquist, M. M., Matthews, A., & Silver, E. A. (1983). Results of the third NAEP mathematics assessment: Secondary school. Mathematics Teacher, 76(9), 652-659.
Ferretti, F., Lemmo, A., & Maffia, A. (2015). "Half of something": How students talk about rationals. In Proceedings of the 39th Conference of the International Group for the Psychology of Mathematics Education, Vol. 1, p. 159. Hobart, Australia: PME.
Ferretti, F., & Gambini, A. (2018). A vertical analysis of difficulties in mathematics from secondary school to university level: Some evidence stemming from standardized assessment. In T. Dooley & G. Gueudet (Eds.), Proceedings of the Tenth Congress of the European Society for Research in Mathematics Education (CERME10). Dublin, Ireland: DCU and ERME.
Giberti, C., Zivelonghi, A., & Bolondi, G. (2016). Gender differences and didactic contract: Analysis of two INVALSI tasks on powers properties. In C. Csíkos, A. Rausch, & J. Szitányi (Eds.), Proceedings of the 40th Conference of the International Group for the Psychology of Mathematics Education, Vol. 2, pp. 275-282. Szeged, Hungary: PME.
Gipps, C. (1994). Beyond testing: Towards a theory of educational assessment. London: The Falmer Press.
Harlen, W. (2000). Teaching, learning and assessing science 5-12 (3rd ed.). London: Paul Chapman Publishing.
Looney, J. W. (2011). Integrating formative and summative assessment: Progress toward a seamless system? OECD Education Working Papers, No. 58. OECD Publishing.
Meckes, L. (2007). Evaluación y estándares: Logros y desafíos para incrementar el impacto en calidad educativa. Revista Pensamiento Educativo, 40(1).
Morris, A. (2011). Student standardized testing: Current practices in OECD countries and a literature review. OECD Education Working Papers, No. 65. OECD Publishing.
Schoenfeld, A. H. (1985). Mathematical problem solving. Orlando, FL: Academic Press.
Scriven, M. (1967). The methodology of evaluation. In R. E. Tyler, R. M. Gagné, & M. Scriven (Eds.), Perspectives of curriculum evaluation. Chicago: AERA Monograph Series in Education.
Signorini, G. (2015). Attitudes of teachers towards the external evaluation system for the assessment of mathematical learning. In K. Krainer & N. Vondrová (Eds.), Proceedings of the Ninth Congress of the European Society for Research in Mathematics Education (CERME 9), pp. 3170-3171. Prague, Czech Republic.
Skaalvik, E. M., & Skaalvik, S. (2009). Does school context matter? Relations with teacher burnout and job satisfaction. Teaching and Teacher Education, 25, 518-524.
Weeden, P., Winter, J., & Broadfoot, P. (2002). Assessment: What's in it for schools? London: Routledge.