Session Information
16 SES 12, ICT in Educational Contexts
Paper Session
Contribution
Based on the case of a student feedback tool that we developed and tested, we present the results of our follow-up studies in order to discuss critical issues of data-driven improvement of teaching and learning.
Teacher feedback for student learning has been in focus for a while (Hattie & Timperley 2007). Comparable systematic research on feedback from students is either narrow or missing. Previous studies deal with student evaluations as part of an institution's quality standards or as a method to identify exemplary teaching, teachers or schools. Other studies focus only on students' self-evaluation of learning outcomes or on students' perception of teacher feedback (references). We assume that the value of immediate and regular student feedback cycles for instructional improvement is underestimated.
Research on data-driven improvement presents a similar picture. There is a tradition within school effectiveness research that, from a rather sociological and organizational perspective, deals with school leadership and learning outcomes (Leithwood & Louis 2012; Anderson, Leithwood & Strauss 2010). According to the latest Swedish review of IT-based evaluation and feedback, most studies focus on individual, maximally effective, automatic system-generated feedback to the student (Hirsh & Lindberg 2015). The emerging tradition of data-driven instructional improvement may focus either on learning outcomes or on the learning process (Robinson et al. 2009). We want to investigate the possibilities of data-driven instructional improvement through digital and systematic student feedback cycles.
For this purpose we designed, developed and tested an app-like tool called the Student Barometer (Danish: Elevbaro.dk) in connection with a national development and research project on inclusion and differentiation in digital environments (Graf 2013). The tool enables teachers to collect student feedback quickly on a daily basis via mobile devices. The student interface, inspired by an analog feedback tool, the learning rating scale (Nissen 2012), is designed as five stepless sliders between a positive and a negative smiley. In this way the tool collects the students' self-reflective statements on their engagement, feeling of being challenged, need for help, participation and task-consciousness. The teacher interface is divided into a sharing part for distributing the Barometer to the students and a data-viewing part, where teachers can access different representations of the data. There are four basic views: a bar chart and a box plot of the whole class's most recent rating, and development lines and box plots over multiple ratings.
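The data model described above can be sketched as follows. This is a minimal, hypothetical Python illustration of the five-dimension rating scheme and the box-plot class view; the actual Elevbaro.dk implementation is not public, and all names here are our own assumptions:

```python
from statistics import median

# The five self-reflection dimensions each student rates on a stepless slider
# (names are illustrative, taken from the tool description above).
DIMENSIONS = ["engagement", "challenge", "need_for_help",
              "participation", "task_consciousness"]

def quartiles(values):
    """Five-number summary (min, Q1, median, Q3, max) for one box plot."""
    s = sorted(values)
    n = len(s)
    lower, upper = s[:n // 2], s[(n + 1) // 2:]
    return (s[0], median(lower), median(s), median(upper), s[-1])

def class_boxplot(ratings, dimension):
    """Summarise one dimension across a class.

    ratings: one dict per student mapping dimension -> slider value in
    [0.0, 1.0], where 0.0 is the negative smiley and 1.0 the positive one.
    """
    return quartiles([r[dimension] for r in ratings])

# Example: three students' engagement ratings for one lesson.
ratings = [{"engagement": 0.2}, {"engagement": 0.5}, {"engagement": 0.8}]
print(class_boxplot(ratings, "engagement"))
```

Repeating this summary over successive lessons would yield the "box plots over multiple ratings" view; plotting the per-lesson medians would give the development lines.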
The follow-up research project dealt with two interrelated research questions.
- In which ways can the digital feedback tool help teachers pay new attention to their students and improve the differentiation of teaching (and learning)?
- How can the Student Barometer be redesigned into a smooth and functional tool?
From the beginning our stance has been that technology is the wrong driver (Fullan 2011). The feedback tool has to be seen within a broader didactical concern. To investigate how teachers typically deal with student feedback, we carried out a survey of all teachers in the municipality of Esbjerg, identifying typical teacher practices in relation to student feedback. The survey was designed to answer the following central questions: When, how and about what do teachers collect student feedback, and for which purposes? How do teachers use the feedback and draw conclusions from it?
Method
Expected Outcomes
References
Anderson, S., Leithwood, K., & Strauss, T. (2010). Leading Data Use in Schools: Organizational Conditions and Practices at the School and District Levels. Leadership and Policy in Schools, 9(3), 292–327.
Fullan, M. (2011). Choosing the wrong drivers for whole system reform. Seminar Series Paper No. 204, Centre for Strategic Education.
Graf, S. T. (2013). How can the Development of Digital Learning Environments make a Difference for Differentiation in Teaching? – An Intervention Study. IARTEM Textbooks and Educational Media in a Digital Age, Ostrava: IARTEM.
Hattie, J., & Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), 81–112.
Hirsh, Å., & Lindberg, V. (2015). Formativ bedömning på 2000-talet – en översikt av svensk och internationell forskning. Vetenskapsrådet.
Kølsen, C., et al. (2014). Metoderapport i relation til baseline for demonstrationsskoleforsøg. http://pure.au.dk/portal/files/95993953/Bilag_B._Metoderapport_i_relation_til_baseline_for_demonstrationsskolefors_g.pdf
Leithwood, K., & Louis, K. S. (2012). Linking Leadership to Student Learning. Jossey-Bass.
Nissen, P. (2012). Hvordan måler man om eleverne lærer noget i skolen? Økonomi & Politik, 85(2), 25–33.
Robinson, V., Hohepa, M., & Lloyd, C. (2009). School Leadership and Student Outcomes: Identifying What Works and Why. Best Evidence Synthesis Iteration. Wellington: Ministry of Education, New Zealand.
Research project on Inclusion and differentiation in digital environments: http://auuc.demonstrationsskoler.dk/