
Explainable Attentional Neural Recommendations for Personalized Social Learning

  • Conference paper
AIxIA 2020 – Advances in Artificial Intelligence (AIxIA 2020)

Abstract

Learning and training processes are increasingly affected by the diffusion of Artificial Intelligence (AI) techniques and methods. AI can be exploited in many ways to support education, but deep learning (DL) models in particular typically suffer from some degree of opacity and lack of interpretability. Explainable AI (XAI) aims to create new AI techniques whose outputs and decisions come with greater transparency and interpretability. In the educational field it is particularly significant, and challenging, to understand the reasons behind a model's outcomes, especially when it comes to suggestions for creating, managing or evaluating courses and didactic resources. Deep attentional mechanisms have proved particularly effective at identifying relevant communities and relationships in a given input network, and these can be exploited to provide useful information for interpreting the suggested decision process. In this paper we present the first stages of our ongoing research project, aimed at empowering the recommender system of the educational platform “WhoTeach” with explainability, so as to help teachers and experts create and manage high-quality courses for personalized learning.

The presented model is our first attempt at introducing explainability into the system. As shown, the model has strong potential to provide relevant recommendations. Moreover, it allows effective techniques for achieving full explainability to be implemented on top of it.
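As a purely illustrative aside (not code from the paper), the attention coefficients of a GAT-style layer are one concrete form this interpretive signal can take: each coefficient weighs how much a neighbouring node contributes to a node's representation, and hence to a recommendation made for it. The minimal Python/NumPy sketch below, with hypothetical names and shapes, shows how such coefficients could be computed and inspected.

import numpy as np

# Illustrative sketch only -- not the paper's implementation. A GAT-style
# attention layer yields per-neighbour coefficients alpha_ij that can be
# surfaced as an explanation signal (e.g. which peers or didactic resources
# most influenced a suggestion). All names and shapes are hypothetical.
def gat_attention(H, A, W, a, leak=0.2):
    """H: node features (N, F); A: adjacency (N, N); W: projection (F, Fp);
    a: attention vector (2*Fp,). Returns row-normalised coefficients (N, N)."""
    Z = H @ W                                        # projected features (N, Fp)
    N = Z.shape[0]
    A = A + np.eye(N)                                # self-loops: every row attends to itself
    e = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            s = a @ np.concatenate([Z[i], Z[j]])     # score e_ij = a^T [z_i || z_j]
            e[i, j] = s if s > 0 else leak * s       # LeakyReLU
    e = np.where(A > 0, e, -np.inf)                  # restrict attention to neighbours
    e = e - e.max(axis=1, keepdims=True)             # numerical stability
    alpha = np.exp(e)
    return alpha / alpha.sum(axis=1, keepdims=True)

# Toy usage: alpha[i] ranks node i's neighbours by influence, which is the
# kind of information an explainable recommender could show to a teacher.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
A = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], dtype=float)
W = rng.normal(size=(8, 4))
a = rng.normal(size=(8,))
print(gat_attention(H, A, W, a).round(2))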



Author information

Corresponding author: Luca Marconi


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Marconi, L., Aragon, R.A.M., Zoppis, I., Manzoni, S., Mauri, G., Epifania, F. (2021). Explainable Attentional Neural Recommendations for Personalized Social Learning. In: Baldoni, M., Bandini, S. (eds) AIxIA 2020 – Advances in Artificial Intelligence. AIxIA 2020. Lecture Notes in Computer Science, vol 12414. Springer, Cham. https://doi.org/10.1007/978-3-030-77091-4_5


  • DOI: https://doi.org/10.1007/978-3-030-77091-4_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77090-7

  • Online ISBN: 978-3-030-77091-4

  • eBook Packages: Computer Science, Computer Science (R0)
