Research Article
DOI: 10.1145/3340531.3411945

Dynamic Representation Learning for Large-Scale Attributed Networks

Published: 19 October 2020

ABSTRACT

Network embedding, which aims at learning low-dimensional representations of nodes in a network, has drawn much attention for various network mining tasks, ranging from link prediction to node classification. In addition to topological information, networks carry rich attributes that exert a strong influence on network formation, so many efforts have been devoted to attributed network embedding. However, these methods assume static network data and do not account for evolving network structure or changes in the associated attributes. Furthermore, scalability is a key factor when performing representation learning on large-scale networks with huge numbers of nodes and edges. In this work, we address these challenges by developing DRLAN, a Dynamic Representation Learning framework for large-scale Attributed Networks. DRLAN generalizes dynamic attributed network embedding from two perspectives. First, we develop an integrative learning framework with an offline batch embedding module that preserves both node and attribute proximities, and an online network embedding model that recursively updates the learned representation vectors. Second, we design a recursive pre-projection mechanism that efficiently models attribute correlations based on the associative property of matrix multiplication. Finally, we perform extensive experiments on three real-world network datasets to show the superiority of DRLAN over state-of-the-art network embedding techniques in terms of both effectiveness and efficiency. The source code is available at: https://github.com/ZhijunLiu95/DRLAN.
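The recursive pre-projection mechanism hinges on the associativity of matrix multiplication: attribute information is projected once, and that projection is reused in both the offline batch stage and each online update. The sketch below is a hypothetical illustration with synthetic matrices (adjacency A, attributes X, random projection R) and a plain Gaussian projection; it is not the authors' DRLAN code, which is available at the repository linked above.

```python
# Minimal sketch of pre-projection via associativity: (A X) R == A (X R),
# so X R is computed once and reused for batch and incremental embeddings.
# All matrices here are synthetic placeholders.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n, m, d = 1000, 200, 32                                            # nodes, attributes, embedding size
A = sp.random(n, n, density=0.01, random_state=0, format="csr")    # adjacency
X = sp.random(n, m, density=0.05, random_state=1, format="csr")    # node attributes
R = rng.normal(size=(m, d)) / np.sqrt(d)                           # random projection matrix

# Naive order: form the n x m product A X, then project it.
E_naive = (A @ X) @ R

# Pre-projection: project the attributes once (n x d), then mix along the graph.
P = X @ R
E = A @ P                                                 # offline batch embedding
assert np.allclose(E_naive, E)                            # associativity guarantees equality

# Online update: when a sparse batch of new edges dA arrives, the embedding is
# refreshed recursively without recomputing the attribute projection.
dA = sp.random(n, n, density=0.001, random_state=2, format="csr")
E = E + dA @ P
```

In this sketch the expensive attribute product is formed only once offline, and each online step reduces to a sparse-times-dense product; DRLAN's actual mechanism additionally models attribute correlations, which this toy example does not attempt.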


Supplemental Material

3340531.3411945.mp4

We develop an integrative learning framework, DRLAN, with an offline batch embedding module that preserves both node and attribute proximities, and an online network embedding model that recursively updates the learned representation vectors.


Published in
CIKM '20: Proceedings of the 29th ACM International Conference on Information & Knowledge Management
October 2020
3619 pages
ISBN: 9781450368599
DOI: 10.1145/3340531
Copyright © 2020 ACM


Publisher
Association for Computing Machinery, New York, NY, United States



Acceptance Rates
Overall acceptance rate: 1,861 of 8,427 submissions, 22%
