
Hierarchical Policy Network with Multi-agent for Knowledge Graph Reasoning Based on Reinforcement Learning

  • Conference paper
Knowledge Science, Engineering and Management (KSEM 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12815)

Abstract

Multi-hop reasoning on Knowledge Graphs (KGs) aims to infer triplets that are not in the KG, addressing the KG incompleteness problem. Reinforcement learning (RL) methods, which exploit an agent that takes incremental steps by sampling a relation and an entity (together called an action) to extend its path, have yielded superior performance. Existing RL methods, however, cannot gracefully handle the large action spaces of KGs, which lead to the curse of dimensionality. Hierarchical reinforcement learning decomposes a complex RL problem into several sub-problems and solves them separately, which can achieve better results than solving the entire problem directly. Building on this, we propose to divide the action selection at each step into three stages: 1) selecting a pre-clustered relation cluster, 2) selecting a relation within the chosen cluster, and 3) selecting a tail entity of the relation chosen in the previous stage. Each stage has its own agent to make the selection, and together the agents form a hierarchical policy network. Furthermore, for the environment representation of KGs, existing methods simply concatenate its different parts (the embeddings of the start entity, the current entity, and the query relation), ignoring the potential connections between them. We therefore propose a convolutional neural network structure based on the inception network to better extract features of the environment and enhance the interaction across its different parts. Experimental results on three datasets demonstrate the effectiveness of the proposed method.
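The three-stage action selection described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy KG, the relation clusters, the embedding size, and the dot-product "policy heads" are all hypothetical stand-ins for the learned hierarchical policy network, and selection is greedy rather than sampled.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(scores):
    z = np.exp(scores - scores.max())
    return z / z.sum()

# Toy KG fragment (hypothetical): relations pre-clustered by type, and the
# tail entities reachable from the current entity via each relation.
relation_clusters = {
    "location": ["born_in", "lives_in"],
    "work": ["works_for", "founded"],
}
tails = {
    "born_in": ["Paris", "London"],
    "lives_in": ["Berlin"],
    "works_for": ["AcmeCorp"],
    "founded": ["StartupX", "LabY"],
}

DIM = 8  # illustrative embedding size
_embed = {}  # lazily assigned random embeddings for every symbol

def emb(name):
    if name not in _embed:
        _embed[name] = rng.standard_normal(DIM)
    return _embed[name]

def policy_score(state, candidates):
    """Stand-in for a learned policy head: softmax over dot products
    between the environment state and each candidate's embedding."""
    return softmax(np.array([state @ emb(c) for c in candidates]))

def select_action(state):
    # Stage 1: the first agent picks a relation cluster.
    cluster_names = list(relation_clusters)
    cluster = cluster_names[int(np.argmax(policy_score(state, cluster_names)))]
    # Stage 2: the second agent picks a relation inside that cluster.
    rels = relation_clusters[cluster]
    relation = rels[int(np.argmax(policy_score(state, rels)))]
    # Stage 3: the third agent picks a tail entity of the chosen relation.
    ents = tails[relation]
    entity = ents[int(np.argmax(policy_score(state, ents)))]
    return cluster, relation, entity

state = rng.standard_normal(DIM)
cluster, relation, entity = select_action(state)
```

The point of the decomposition is that each stage scores only a small candidate set (clusters, then relations in one cluster, then tails of one relation), so no single policy head ever faces the full KG action space.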



Author information


Corresponding author

Correspondence to Mingming Zheng.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zheng, M., Zhou, Y., Cui, Q. (2021). Hierarchical Policy Network with Multi-agent for Knowledge Graph Reasoning Based on Reinforcement Learning. In: Qiu, H., Zhang, C., Fei, Z., Qiu, M., Kung, S.Y. (eds.) Knowledge Science, Engineering and Management. KSEM 2021. Lecture Notes in Computer Science, vol. 12815. Springer, Cham. https://doi.org/10.1007/978-3-030-82136-4_36

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-82136-4_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-82135-7

  • Online ISBN: 978-3-030-82136-4

  • eBook Packages: Computer Science, Computer Science (R0)
