Abstract
Network representation learning, which aims to capture various properties of a network in a low-dimensional feature space, has recently attracted significant attention. Almost all existing static and dynamic network embedding methods that employ deep models adopt dense structures. Deep models can ensure that the embedding performs well on downstream tasks (link prediction, network reconstruction, etc.); however, such works ignore the high complexity of training deep models. In this paper, we propose a method that learns dynamic network embeddings using a sparse deep model. The general idea underpinning our approach is to greatly reduce the number of connections between layers in the deep model. Moreover, the sparse structure of the model evolves during training so as to fit the network data. Experimental results on simulated benchmark networks and real-world networks show that, compared with existing network embedding methods that use dense structures, our method greatly reduces the number of trainable weights while minimally affecting, and sometimes even improving, performance on network tasks.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Tang, H., Shao, Z., Zhang, Y., Ma, L., Lin, Q. (2022). Dynamic Network Embedding by Using Sparse Deep Autoencoder. In: Fang, L., Povey, D., Zhai, G., Mei, T., Wang, R. (eds) Artificial Intelligence. CICAI 2022. Lecture Notes in Computer Science(), vol 13606. Springer, Cham. https://doi.org/10.1007/978-3-031-20503-3_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20502-6
Online ISBN: 978-3-031-20503-3