
TLVANE: a two-level variation model for attributed network embedding

Original Article · Neural Computing and Applications

Abstract

Network embedding aims to learn low-dimensional representations for nodes in social networks, which can serve many applications such as node classification, link prediction and visualization. Most network embedding methods learn the representations solely from the topological structure. Recently, attributed network embedding, which exploits both the topological structure and node content to jointly learn latent representations, has become a hot topic. However, previous studies obtain the joint representations by directly concatenating the representations learned from each aspect, which may lose the correlations between the topological structure and node content. In this paper, we propose a new attributed network embedding method, TLVANE, which addresses this drawback by exploiting deep variational autoencoders (VAEs). In particular, a two-level VAE model is built, where the first level accounts for the joint representations and the second level for the embeddings of each aspect. Extensive experiments on three real-world datasets demonstrate the superiority of the proposed method over state-of-the-art competitors.
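The abstract describes the architecture only at a high level. Purely as an illustration of the general idea, and not the authors' actual implementation, the following PyTorch sketch shows one plausible reading of a two-level VAE for attributed networks: two second-level VAE branches encode a node's adjacency row (structure) and attribute vector (content) into aspect embeddings, and a first-level VAE maps the pair of aspect codes into a joint latent, so the final node embedding is a learned fusion rather than a raw concatenation. All class names, dimensions, and loss choices here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VAEBranch(nn.Module):
    """One Gaussian VAE: encoder, reparameterized sample, decoder."""

    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)
        self.mu = nn.Linear(hid_dim, lat_dim)
        self.logvar = nn.Linear(hid_dim, lat_dim)
        self.dec = nn.Sequential(
            nn.Linear(lat_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, in_dim)
        )

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar, self.dec(z)


def kl_term(mu, logvar):
    """KL divergence between N(mu, sigma^2) and the standard normal prior."""
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())


class TwoLevelVAE(nn.Module):
    """Second level: one VAE per aspect (structure, attributes).
    First level: a VAE over the pair of aspect codes yields the joint embedding.
    """

    def __init__(self, struct_dim, attr_dim, hid_dim=256, lat_dim=64, joint_dim=32):
        super().__init__()
        self.struct_vae = VAEBranch(struct_dim, hid_dim, lat_dim)
        self.attr_vae = VAEBranch(attr_dim, hid_dim, lat_dim)
        self.joint_vae = VAEBranch(2 * lat_dim, hid_dim, joint_dim)

    def forward(self, adj_rows, attrs):
        zs, mus, lvs, adj_hat = self.struct_vae(adj_rows)
        za, mua, lva, attr_hat = self.attr_vae(attrs)
        pair = torch.cat([zs, za], dim=-1)
        # The joint code z_joint, not the raw concatenation, is the node embedding.
        z_joint, muj, lvj, pair_hat = self.joint_vae(pair)
        loss = (
            F.mse_loss(adj_hat, adj_rows) + kl_term(mus, lvs)
            + F.mse_loss(attr_hat, attrs) + kl_term(mua, lva)
            + F.mse_loss(pair_hat, pair.detach()) + kl_term(muj, lvj)
        )
        return z_joint, loss
```

In this sketch, `z_joint` would feed downstream tasks such as node classification (`z, loss = model(adj_batch, attr_batch); loss.backward()`); the reconstruction losses and training details of the actual TLVANE model may differ.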


Notes

  1. http://citeseerx.ist.psu.edu/.

  2. http://arnetminer.org/citation (V4 version is used).

  3. http://linqs.cs.umd.edu/projects//projects/lbc/index.html.


Acknowledgements

This research was supported in part by NSFC under Grant No. 61572158 and No. 61602132, and Shenzhen Science and Technology Program under Grant No. JCYJ20160330163900579, No. JCYJ20170413105929681 and No. JCYJ20170811160212033.

Author information

Correspondence to Yunming Ye.


Cite this article

Huang, Z., Li, X., Ye, Y. et al. TLVANE: a two-level variation model for attributed network embedding. Neural Comput & Applic 32, 4835–4847 (2020). https://doi.org/10.1007/s00521-018-3875-5
