Learning Analytics in Secondary Education: Assessment for Learning in 7th Grade Language Teaching
Author(s):
Conference:
ECER 2017
Format:
Paper

Session Information

27 SES 13 B, Language and Education Research

Paper Session

Time:
2017-08-25
13:30-15:00
Room:
K3.05
Chair:
Anke Wegner

Contribution

Learning analytics can make teaching and learning more efficient. Learning analytics can be defined as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (Siemens & Gasevic, 2012). Data from online tests not only provide feedback to learners, but also inform teachers about both the performance level of their students and the details of their test performance. Teachers might use these insights in preparing their lessons and in their teaching in class. However, most research on learning analytics has been done in the domain of higher education and on data generated from online learning environments. Moreover, a large body of research has focused on supporting students with tools that enhance their awareness of their activities in, for example, collaborative learning (cf. Dehler, Bodemer, Buder, & Hesse, 2011; Jermann, Soller, & Muehlenbrock, 2005). Yet learning analytics can inform teachers in similar ways (cf. Van Leeuwen, Janssen, Erkens, & Brekelmans, 2014) and can also make offline teaching and learning more efficient, with teachers adapting their pedagogy on the basis of the online test performance of their students.

Formative feedback, or assessment for learning, is a crucial concept here. Assessment for learning can be defined as follows: “Practice in a classroom is formative to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited” (Black & Wiliam, 2009, p. 9). In assessment for learning, feedback includes not only information about the gap between the actual level and the reference level; it also includes information generated within a particular system and for a particular purpose, making feedback necessarily domain specific (Wiliam, 2011). The literature generally suggests that the use of assessment to inform instruction can have a significant impact on learning, and two features appear to be important in designing assessment that supports learning (Wiliam, 2007, 2011). First, the evidence should be more than information about the gap between current and desired performance; it must also indicate what kinds of instructional activities are likely to improve performance. Second, learners should be engaged in actions to improve their learning, which might be remedial activities by the teacher, peer support, or reflecting on different ways to move their learning forward.

In the current study, a case study was carried out with five secondary language teachers who used online performance data of their students to adapt their lesson plans and their teaching in subsequent lessons. Three research questions were formulated:

  1. What kind of learner data do teachers use for their teaching practice?
  2. How do teachers use learner data in their instructional practice?
  3. How are these classroom instructions evaluated by students?

Method

Five teachers, one mentor and their 114 7th-grade students from one secondary school participated. During one school year, 12 two-week cycles were studied. Each cycle started with a meeting in the computer room at school in which students completed online Dutch language tests, followed by a team meeting in which the teachers prepared one particular lesson of the following week based on the learning analytics, the lesson itself, and an evaluation team meeting, which started a new cycle.

The software Got it Taal (Got it Language, https://www.thiememeulenhoff.nl/got-it) was used for the language tests. This online tool includes student tests in the domains of reading, spelling and grammar of the Dutch language. For all students, taking a test followed the same procedure:

  1. Students took a starting test to set the baseline of their competency in each domain;
  2. Students completed a test at the level of their baseline;
  3. Students completed the next tests at a lower or higher level, based on their test scores; and
  4. Students continued with the next topic once they had completed the most difficult tests and reached the maximum score.

The tests were either multiple-choice or open-answer tests. With multiple-choice items, students got a second chance if an answer was incorrect; if the second choice was also incorrect, the tool provided the correct option. With open-answer items, all correct answers were kept and incorrect answers were deleted after completion, so that in the next step students only revised the incorrect answers. Students could ask for online help: additional written clarifications or explanations presented in a video clip.

Data collection included students’ language test scores, lesson preparation forms, reports of the 12 team meetings and a start-up meeting, video recordings of the 55 lessons, and interviews with the five teachers, the mentor and 47 students. Ten of these students were interviewed two or three times, resulting in 60 student interviews in total. Each student interview lasted about 20 minutes and started with a short test on the particular topic to confirm the student’s performance level. Then a short piece of the video recording of a feedback session in class was shown, and students were asked to describe what was happening, how they felt about the situation, what the teacher did, and what feedback they received.

Expected Outcomes

Teachers had difficulties interpreting the language test scores from Got it. They reported that they were not supported with sufficiently clear information about students’ performance. Therefore, the software editor adapted the presentation tool during the project period: it added more details about the tasks and how students completed them (instead of only presenting scores and mean group scores), an explicit link between the scores and the specific part of the language test, and the date on which students completed the tests. These changes helped the teachers to better interpret the learner data. Based on the assessments and the team meetings, the teachers used various forms of feedback during their lessons, ranging from individual feedback and additional tasks, via working in pairs (mostly one poorly performing student with one high-performing student), to whole-class instruction based on the learner data from the online tests. Finally, students were generally not very satisfied with an individual approach by their teacher during the lessons. Neither the poorly performing students nor the high-performing students evaluated an individual teaching approach positively, as they felt that they got too much attention, were placed in a special position, or did not learn much from this approach about particular topics of the Dutch language. In the paper presentation, these findings will be elaborated and discussed in terms of the usefulness of learning analytics for teaching in secondary schools and the literature on the requirements for assessment for learning to be successful.

References

Black, P. J., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5-31.

Dehler, J., Bodemer, D., Buder, J., & Hesse, F. W. (2011). Guiding knowledge communication in CSCL via group knowledge awareness. Computers in Human Behavior, 27(3), 1068-1078.

Jermann, P., Soller, A., & Muehlenbrock, M. (2005). From mirroring to guiding: A review of state of the art technology for supporting collaborative learning. International Journal of Artificial Intelligence in Education, 15(4), 261-290.

Van Leeuwen, A., Janssen, J., Erkens, G., & Brekelmans, M. (2014). Supporting teachers in guiding collaborating students: Effects of learning analytics in CSCL. Computers & Education, 79(1), 28-39.

Siemens, G., & Gasevic, D. (2012). Guest editorial: Learning and knowledge analytics. Educational Technology & Society, 15(3), 1-2.

Wiliam, D. (2007). Keeping learning on track: Classroom assessment and the regulation of learning. In F. K. Lester, Jr. (Ed.), Second handbook of research on mathematics teaching and learning (pp. 1053–1098). Greenwich, CT: Information Age Publishing.

Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3-14.

Author Information

Wilfried Admiraal (presenting / submitting)
Leiden University, The Netherlands
Amsterdam University of Applied Sciences
Open Doors Education
