Session Information
09 SES 01 A, Advancing Educational Equity and Innovation: Policy, Measurement, and AI-Driven Interventions
Paper Session
Contribution
Enhancing achievement is one of the most crucial concerns for students. Students can improve their academic performance through various approaches, one of which is proper guidance. A tutor, for instance, can help students become more organized and maintain their focus for a better study process by providing feedback (Kahu & Picton, 2019). However, not everyone can reach a tutor, due to reasons such as the limited availability of qualified tutors, geographical barriers, and financial constraints; this gap is at the heart of the 2-sigma problem.
Bloom's (1984) 2-sigma problem refers to his finding that students who received one-on-one tutoring performed two standard deviations better than those who learned in traditional classroom environments. This finding reveals that many students possess the potential to achieve this high level of learning. Since one-on-one tutors are not easily accessible to all students, following a study plan is usually a more practical way of studying. The challenge, however, is that most students do not know how to create effective study plans, which leads them to spend their time inefficiently and earn lower grades (Chanamarn & Tamee, 2017). At this point, students can receive assistance tailored to their individual needs from artificial intelligence (AI), which is now easily accessible.
As a possible solution to the 2-sigma problem, Corbett (2001) proposed Intelligent Tutoring Systems (ITSs). ITSs are software programs that use AI techniques to offer individualized instruction by modeling the subject matter, the learner, and the most effective teaching methods (Nwana, 1990). AI techniques in ITSs are used to generate adaptive feedback, customize adaptive learning content and navigation, identify learner characteristics, and assess student performance (Mousavinasab et al., 2021). However, such tutoring systems show clear academic benefits only when they are well structured. Accordingly, students need to be assessed properly before being assigned effective, well-structured study plans.
Assessing students’ progress is essential to further improve academic achievement. To give each student appropriate support for their individual study needs, it is important to gather comprehensive information about students’ ability levels while evaluating their performance. Shorter tests that collect information with less error are also beneficial for creating effective study plans. In this context, compared to traditional linear tests, adaptive tests such as Computerized Adaptive Testing (CAT) and Multistage Testing (MST) offer a better assessment process (Sari et al., 2016).
CAT is a computer-based assessment that adjusts the difficulty of the questions presented to the test taker’s estimated ability level using an item-selection algorithm. However, CAT also has several drawbacks: computer literacy is needed to reduce the mode effect in computer-based testing (Alderson, 2000), CAT requires a large and costly item pool, and it generally does not allow test takers to review or skip items (Sari et al., 2016). Researchers have therefore looked for ways to minimize the drawbacks of linear tests and CAT while incorporating most of their benefits into MST (Yan et al., 2014). MST is a variant of CAT that is more accurate and efficient than linear tests when measuring a broad range of proficiency levels (Yan et al., 2014).
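To make the CAT mechanism above concrete, the following minimal Python sketch selects the next item by maximum Fisher information under a two-parameter logistic (2PL) model. The item parameters are illustrative assumptions, not items from the study's pool.

```python
import math

def p_correct(theta, a, b):
    # 2PL probability of a correct response at ability theta,
    # for an item with discrimination a and difficulty b
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information of a 2PL item: a^2 * p * (1 - p)
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, pool, administered):
    # Pick the not-yet-administered item that is most informative
    # at the current provisional ability estimate
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *pool[i]))

# Hypothetical (a, b) pairs; the item with difficulty closest to the
# current theta and high discrimination is selected
pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
print(next_item(0.5, pool, set()))  # selects index 2: b = 0.5 matches theta
```

This is the core adaptive step: after each response, theta is re-estimated and the selection repeats over the remaining pool.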
Determining students’ ability levels with MST therefore has the potential to facilitate the creation of AI-based study plans. In this context, the research question of the current study is: “To what extent does an AI-based weekly study plan, created according to MST results, improve high school students’ mathematics achievement over a 16-week period?”
Method
Participants
The participants of the current study are 300 high school students, with 150 in the control group and 150 in the experimental group.
Instruments
To monitor students’ progress, 16 parallel tests based on MST were constructed. To calibrate items and estimate item parameters, approximately 900 items were selected from the item pool, and a Multiple Matrix Sampling (MMS) design was applied. The MMS method uses resources efficiently by reducing the test duration for each participant, lowering assessment costs, and simplifying the simultaneous estimation of multiple test parameters (Shoemaker, 1973). In this process, 28 booklets were prepared based on MMS, each containing 40 items: 30 non-rotating items and 10 rotating items. For example, booklet 1 is linked to booklet 2 through 10 shared items. Item analysis based on Item Response Theory (IRT) was then conducted, and the item pool was finalized. Based on the results, the MST modules were created; their content covers 9th- and 10th-grade mathematics topics.
Procedure
The weekly tests will be administered as MSTs following a three-stage process, structured according to a 1-3-3 design (Yan et al., 2014). In the first stage, all examinees start with the same set of moderately difficult items. In the second stage, a proficiency estimate will be calculated for each examinee and used to select one of three modules of easy, medium, or hard difficulty. After a further proficiency estimation, the module for the third stage will be selected in the same way. Throughout the 16-week period, examinees will take the same MST design, but with new modules each week. To improve mathematics achievement through weekly study plans over 16 weeks, students will register in the system. The system will administer the multistage tests to students in the experimental group. After each MST, AI will generate a weekly study plan for each student for the following week.
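The 1-3-3 routing described above can be sketched as follows. The cut scores and module labels here are illustrative assumptions, not the study's actual routing rules:

```python
def route(theta_hat, cuts=(-0.5, 0.5)):
    # Choose the next module from a provisional ability estimate.
    # The cut scores are hypothetical, for illustration only.
    if theta_hat < cuts[0]:
        return "easy"
    if theta_hat < cuts[1]:
        return "medium"
    return "hard"

def run_133(theta_updates):
    # 1-3-3 design: one common routing module in stage 1,
    # then one of three modules in each of stages 2 and 3,
    # selected from the provisional theta after each stage.
    path = ["routing"]
    for theta in theta_updates:
        path.append(route(theta))
    return path

print(run_133([0.8, 0.3]))  # ['routing', 'hard', 'medium']
```

An examinee who scores well on the routing module is sent to a harder second-stage module, and the third-stage module is chosen the same way from the updated estimate.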
This process will repeat every week for the 16-week period, while students in the control group do not receive study plans.
Data Analysis
The data will be analyzed using mixed ANOVA, as the design includes both a between-subjects factor (group) and a within-subjects factor (time). For the experimental group, the effect of the weekly AI-based study plans on mathematics achievement over time will be measured. The differences in mathematics achievement between the experimental and control groups will also be examined. The first MST will serve as the pre-test and the last MST as the post-test for both groups.
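The key effect a mixed ANOVA tests in this design is the group-by-time interaction. As an illustration, the sketch below computes that interaction as a difference-in-differences of pre/post group means, using toy scores that are hypothetical, not study data:

```python
def mean(xs):
    return sum(xs) / len(xs)

def interaction_contrast(exp_pre, exp_post, ctrl_pre, ctrl_post):
    # Difference-in-differences: (experimental gain) - (control gain).
    # A positive value is the pattern the group x time interaction
    # in the mixed ANOVA would test for significance.
    return (mean(exp_post) - mean(exp_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Toy pre/post mathematics scores (hypothetical)
exp_pre, exp_post = [50, 55, 60], [62, 66, 70]
ctrl_pre, ctrl_post = [52, 54, 58], [54, 57, 60]
print(interaction_contrast(exp_pre, exp_post, ctrl_pre, ctrl_post))
```

In practice the full mixed ANOVA would be fitted with statistical software; this sketch only shows the quantity the interaction term captures.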
Expected Outcomes
Item parameters were determined after the pilot implementation using the mirt package in R (Chalmers, 2012). Based on the item analysis, 16 multistage tests were prepared according to the items’ difficulty levels. Students in the experimental group are expected to follow the AI-based study plans assigned weekly after each MST. Over the 16 weeks, the mathematics achievement scores of students in the experimental group are expected to increase. After the intervention, students in the experimental group are also expected to score significantly higher than the control group. Since participants’ ability estimates will be based on IRT, individualized feedback will be provided to each student. With individualized, AI-based study plans, participants will be able to recognize how to study efficiently and which topics to focus on. The findings of this study are therefore expected to show the potential of AI-based study plans not only in Türkiye but also internationally.
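Since the individualized feedback rests on IRT-based ability estimation, a minimal sketch of one common estimator, the expected a posteriori (EAP) estimate under a 2PL model, is shown below; the item parameters are illustrative assumptions, not the study's calibrated values:

```python
import math

def p_correct(theta, a, b):
    # 2PL response probability
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap_theta(responses, items, grid_points=81):
    # EAP ability estimate: posterior mean of theta over a grid,
    # with an (unnormalized) standard-normal prior
    grid = [-4 + 8 * k / (grid_points - 1) for k in range(grid_points)]
    num = den = 0.0
    for theta in grid:
        weight = math.exp(-0.5 * theta * theta)  # N(0, 1) prior
        for u, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            weight *= p if u == 1 else (1.0 - p)
        num += theta * weight
        den += weight
    return num / den

# Hypothetical (a, b) item parameters; all-correct yields a positive
# estimate, all-incorrect a negative one
items = [(1.0, -0.5), (1.2, 0.0), (0.9, 0.8)]
print(round(eap_theta([1, 1, 1], items), 2))
```

The study's actual estimation is done with mirt; this sketch only illustrates the underlying idea of scoring each student on a common ability scale.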
References
Alderson, J. C. (2000). Technology in testing: The present and the future. System, 28(4), 593–603. https://doi.org/10.1016/S0346-251X(00)00040-3
Bloom, B. S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4–16. https://doi.org/10.3102/0013189X013006004
Chalmers, R. P. (2012). mirt: A multidimensional item response theory package for the R environment. Journal of Statistical Software, 48(6), 1–29. https://doi.org/10.18637/jss.v048.i06
Chanamarn, N., & Tamee, K. (2017). Enhancing efficient study plan for student with machine learning techniques. International Journal of Modern Education and Computer Science, 9(3), 1–9. https://doi.org/10.5815/ijmecs.2017.03.01
Corbett, A. T. (2001). Cognitive computer tutors: Solving the two-sigma problem. In Lecture notes in computer science (pp. 137–147). https://doi.org/10.1007/3-540-44566-8_14
Kahu, E. R., & Picton, C. (2019). The benefits of good tutor-student relationships in the first year. Student Success, 10(2), 23–33. https://search.informit.org/doi/10.3316/informit.592286276010855
Mousavinasab, E., Zarifsanaiey, N., R. Niakan Kalhori, S., Rakhshan, M., Keikha, L., & Ghazi Saeedi, M. (2021). Intelligent tutoring systems: A systematic review of characteristics, applications, and evaluation methods. Interactive Learning Environments, 29(1), 142–163. https://doi.org/10.1080/10494820.2018.1558257
Nwana, H. S. (1990). Intelligent tutoring systems: An overview. Artificial Intelligence Review, 4(4), 251–277. https://doi.org/10.1007/BF00168958
Sari, H. İ., Yahsi-Sari, H., & Huggins-Manley, A. C. (2016). Computer adaptive multistage testing: Practical issues, challenges and principles. Journal of Measurement and Evaluation in Education and Psychology, 7(2), 388–406.
Shoemaker, D. M. (1973). Principles and procedures of multiple matrix sampling. Ballinger Publishing Company.
Yan, D., Lewis, C., & von Davier, A. A. (2014). Overview of computerized multistage tests. In D. Yan, C. Lewis, & A. A. von Davier (Eds.), Computerized multistage testing: Theory and applications (pp. 3–20). Chapman and Hall/CRC.