
Node embedding with capsule generation-embedding network

  • Original Article
  • Published:
International Journal of Machine Learning and Cybernetics

Abstract

Achieving interpretable embeddings of real networks has a significant impact on network analysis tasks. However, the majority of node embedding methods seldom consider the rationality and interpretability of the resulting embeddings. Although graph attention network-based approaches have been employed to improve the interpretability of node embeddings, they only implicitly assign different weights to different nodes in a neighborhood. In this study, we present node embedding with a capsule generation-embedding network (CapsGE), a novel capsule network-based architecture that uses node density, defined from the uncertainty of a node's community membership, to explicitly assign different weights to different nodes in a neighborhood. In addition, the model applies the proposed cognitive reasoning mechanism to the weighted features to achieve rational and interpretable node embeddings. The performance of the method is assessed on the node classification task, and the experimental results demonstrate its advantages over other methods.
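To make the neighborhood-weighting idea concrete, the sketch below illustrates one way explicit, density-based neighbor weights could replace learned attention weights during aggregation. It is not the authors' implementation: the density score used here (the local clustering coefficient) and all function names are illustrative assumptions, and CapsGE's capsule generation, routing, and cognitive reasoning steps are omitted.

```python
# Minimal sketch (not the paper's method): aggregate a node's neighborhood
# using explicit weights derived from a "node density" score instead of
# implicitly learned attention coefficients.
import networkx as nx
import numpy as np


def node_density(g: nx.Graph, v) -> float:
    """Illustrative density score: the local clustering coefficient of v (assumption)."""
    return nx.clustering(g, v)


def density_weighted_aggregate(g: nx.Graph, features: dict, v) -> np.ndarray:
    """Aggregate the features of v's neighbors, weighted by each neighbor's density."""
    neighbors = list(g.neighbors(v))
    if not neighbors:
        return features[v]
    weights = np.array([node_density(g, u) for u in neighbors], dtype=float)
    if weights.sum() == 0.0:
        weights = np.ones_like(weights)  # fall back to uniform weights
    weights /= weights.sum()             # normalize to a distribution over neighbors
    neighbor_feats = np.stack([features[u] for u in neighbors])
    return weights @ neighbor_feats      # explicit weighted sum of neighbor features


# Toy usage: random 4-dimensional features on the karate club graph.
g = nx.karate_club_graph()
feats = {v: np.random.rand(4) for v in g.nodes}
print(density_weighted_aggregate(g, feats, 0))
```

The point of the sketch is only the contrast stated in the abstract: the per-neighbor weights are computed explicitly from a structural quantity rather than being specified implicitly by an attention mechanism.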



Funding

This research was supported by the Hebei Natural Science Foundation (F2021205014), the Science and Technology Project of the Hebei Education Department (ZD2022139), the Central Guidance on Local Science and Technology Development Fund of Hebei Province (226Z1808G), and the Introduction of Overseas Students in Hebei Province (C20200340).

Author information

Corresponding author

Correspondence to Jinghong Wang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, J., Zhang, D., Wei, J. et al. Node embedding with capsule generation-embedding network. Int. J. Mach. Learn. & Cyber. 14, 2511–2528 (2023). https://doi.org/10.1007/s13042-023-01779-9


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s13042-023-01779-9

Keywords
