Multi-layer Attention Social Recommendation System Based on Deep Reinforcement Learning

  • Conference paper
  • First Online:
Knowledge Science, Engineering and Management (KSEM 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14119)

Abstract

Recommendation systems based on deep reinforcement learning recommend content of interest to users through interaction between a recommendation agent and the users. However, most such systems face two limitations: (1) user feedback data are sparse, so the recommendation agent cannot accurately capture users’ dynamic preferences; and (2) users and items remain isolated from one another because of unstructured representations. To address these problems, this paper proposes a multi-layer attention social recommendation method based on deep reinforcement learning that fuses a social network and a user-item bipartite graph into a heterogeneous information network. The method uses subgraphs of the heterogeneous information network to represent users and items structurally through a variant of the graph attention network. In this way, users and items can perceive neighborhood information in the heterogeneous information network, strengthening the correlation between nodes while avoiding the repeated propagation of irrelevant nodes. The attention mechanism in the graph attention network further reduces the influence of noisy nodes: neighborhood information receives corresponding weights, and noisy nodes receive smaller ones. An external attention mechanism then adjusts the weights of historical items in the state representation, realizing selective attention of different users to different items and generating each user’s preference representation. Finally, deep reinforcement learning simulates the interaction between the recommendation system and the user, adapting to dynamic changes in user preferences and accounting for the long-term rewards of recommended items. Experimental results show that the method alleviates both problems and provides users with more accurate recommendations.
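
As a rough, self-contained illustration of the pipeline sketched in the abstract, the toy NumPy code below strings together the three ingredients: attention-weighted aggregation over a user's neighbors in the heterogeneous graph, external attention over the user's historical items to form a state (preference) vector, and a simple value score over candidate items. The function names, the external memories M_k and M_v, and the scoring choices are assumptions made for exposition only; they are not the authors' implementation or hyperparameters.

import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def attend_neighbors(node_emb, neighbor_embs, W, a):
    # Graph-attention-style aggregation: score each neighbor against the target
    # node, softmax the scores, and take the weighted sum, so that noisy
    # neighbors end up with small weights.
    h = W @ node_emb
    scores = np.array([np.tanh(a @ np.concatenate([h, W @ n])) for n in neighbor_embs])
    alpha = softmax(scores)                        # attention weights over neighbors
    agg = sum(w * (W @ n) for w, n in zip(alpha, neighbor_embs))
    return np.tanh(h + agg)                        # node representation fused with its neighborhood

def external_attention_state(user_vec, history_item_vecs, M_k, M_v):
    # External attention over historical items: the items attend to small
    # external memories (M_k, M_v); the refined vectors are then scored against
    # the user to decide how much each past item contributes to the state.
    H = np.stack(history_item_vecs)                # (T, d) previously interacted items
    attn = np.apply_along_axis(softmax, 1, H @ M_k.T)   # (T, m)
    refined = attn @ M_v                           # (T, d)
    weights = softmax(refined @ user_vec)          # per-item importance for this user
    return weights @ H                             # state / preference vector

def q_values(state_vec, candidate_item_vecs):
    # Toy value estimate: score candidates against the state; a DQN-style agent
    # would instead learn this mapping and optimise long-term reward.
    return np.array([state_vec @ v for v in candidate_item_vecs])

# Toy usage with random embeddings.
rng = np.random.default_rng(0)
d, m = 8, 4
W, a = rng.normal(size=(d, d)), rng.normal(size=2 * d)
M_k, M_v = rng.normal(size=(m, d)), rng.normal(size=(m, d))
user = rng.normal(size=d)
friends = [rng.normal(size=d) for _ in range(3)]   # social-network neighbors
history = [rng.normal(size=d) for _ in range(5)]   # clicked/rated items
user_repr = attend_neighbors(user, friends, W, a)
state = external_attention_state(user_repr, history, M_k, M_v)
print(q_values(state, [rng.normal(size=d) for _ in range(10)]).argmax())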


Author information

Corresponding author

Correspondence to Xiangrong Tong.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Li, Y., Tong, X. (2023). Multi-layer Attention Social Recommendation System Based on Deep Reinforcement Learning. In: Jin, Z., Jiang, Y., Buchmann, R.A., Bi, Y., Ghiran, AM., Ma, W. (eds) Knowledge Science, Engineering and Management. KSEM 2023. Lecture Notes in Computer Science, vol 14119. Springer, Cham. https://doi.org/10.1007/978-3-031-40289-0_25

  • DOI: https://doi.org/10.1007/978-3-031-40289-0_25

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40288-3

  • Online ISBN: 978-3-031-40289-0

  • eBook Packages: Computer Science, Computer Science (R0)
