
Towards transparent and trustworthy prediction of student learning achievement by including instructors as co-designers: a case study

Published in Education and Information Technologies

Abstract

Providing educators with understandable, actionable, and trustworthy insights drawn from large-scale, heterogeneous learning data is of paramount importance in realizing the full potential of artificial intelligence (AI) in educational settings. Explainable AI (XAI), in contrast to the traditional "black-box" approach, helps fulfill this goal. We present a case study of building prediction models for undergraduate students' learning achievement in a Computer Science course, in which the course instructor participated as a co-designer and XAI techniques were used to explain the reasoning underlying several machine learning predictions. The explanations enhance the transparency of the predictions and open the door for educators to contribute their judgments and insights. They also allow us to refine the predictions by incorporating the educator's contextual knowledge of the course and of the students. Through this human-AI collaboration process, we demonstrate how keeping instructors in the loop yields a more accountable understanding of students' learning and drives towards transparent and trustworthy prediction of student learning achievement. Our study highlights that trustworthy AI in education should emphasize not only the interpretability of the predicted outcomes and the prediction process, but also the involvement of subject-matter experts throughout the development of prediction models.
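To make the idea of explaining an individual prediction concrete, the sketch below computes exact Shapley attributions for a toy achievement model. Everything here is hypothetical: the feature names (`homework`, `quiz`, `logins`), the baseline values, and the stand-in `predict` function are illustrative assumptions, not the study's actual model or data. The sketch only shows the general mechanism (averaging each feature's marginal contribution over all feature orderings) that SHAP-style explainers approximate at scale.

```python
from itertools import permutations

# Hypothetical inputs: feature names and values are illustrative,
# not taken from the study.
baseline = {"homework": 0.5, "quiz": 0.5, "logins": 0.5}   # e.g., class averages
student  = {"homework": 0.9, "quiz": 0.4, "logins": 0.8}   # one student's features

def predict(x):
    # Stand-in "trained model": a simple weighted score in [0, 1].
    return 0.5 * x["homework"] + 0.3 * x["quiz"] + 0.2 * x["logins"]

def shapley(predict, baseline, student):
    """Exact Shapley attributions: average each feature's marginal
    contribution over every order in which features are revealed."""
    feats = list(student)
    orders = list(permutations(feats))
    phi = {f: 0.0 for f in feats}
    for order in orders:
        x = dict(baseline)              # start from the baseline student
        for f in order:
            before = predict(x)
            x[f] = student[f]           # reveal this student's value for f
            phi[f] += predict(x) - before
    return {f: total / len(orders) for f, total in phi.items()}

phi = shapley(predict, baseline, student)
# Efficiency property: the attributions sum exactly to the gap between
# this student's prediction and the baseline prediction, so an instructor
# can see which features pushed the prediction up or down, and by how much.
```

A per-feature breakdown like `phi` is the kind of artifact an instructor can inspect and challenge, e.g. by flagging that a feature's influence contradicts their contextual knowledge of the course.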


Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.


Acknowledgement

This research was supported in part by the U.S. National Science Foundation through grant IIS1955395.

Author information

Corresponding author

Correspondence to Xiaojing Duan.

Ethics declarations

Conflict of interest

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Duan, X., Pei, B., Ambrose, G.A. et al. Towards transparent and trustworthy prediction of student learning achievement by including instructors as co-designers: a case study. Educ Inf Technol 29, 3075–3096 (2024). https://doi.org/10.1007/s10639-023-11954-8
