Joint Deep Recurrent Network Embedding and Edge Flow Estimation

  • Conference paper
  • First Online:
Intelligent Computing Methodologies (ICIC 2020)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12465)

Included in the following conference series: International Conference on Intelligent Computing (ICIC)

Abstract

Network embedding and edge flow estimation are two of the most important tasks in network analysis. The network embedding task seeks to represent each node as a continuous vector, while edge flow estimation seeks to predict the direction and amount of flow along each edge, given the known flows of some edges. In past work these two tasks have always been studied separately, and their inner connection has been completely ignored. In this paper, we fill this gap by building a joint learning framework for both node embedding and flow amount learning. We first use a long short-term memory (LSTM) network to estimate the embedding of a node from its neighboring nodes' embeddings; the same LSTM model, combined with a multi-layer perceptron (MLP), is also used to estimate a value for each node that represents its importance over the network. The node value is further used to regularize the edge flow learning, so that for each node the balance of in-flow and out-flow matches the node value. We simultaneously minimize the reconstruction error of the neighborhood LSTM for each node, the approximation error of the node value, and the consistency loss between the node value and the flows of its incident edges. Experiments on benchmark datasets show the advantage of the proposed algorithm.

H. Mo is a co-first author.
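To make the joint objective described in the abstract concrete, below is a minimal sketch in PyTorch of the three loss terms: the neighborhood-LSTM reconstruction error, the node-value approximation error, and the consistency term between node values and incident edge flows. The module names, layer sizes, use of a signed incidence matrix for the flow balance, and the placeholder target_value for supervising the node value are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class JointEmbeddingFlowModel(nn.Module):
    """Illustrative sketch: joint node embedding and node-value estimation."""

    def __init__(self, num_nodes, embed_dim=64, hidden_dim=64):
        super().__init__()
        # One learnable embedding vector per node.
        self.embeddings = nn.Embedding(num_nodes, embed_dim)
        # LSTM that reads the sequence of a node's neighbor embeddings.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Maps the LSTM summary back to the embedding space (reconstruction).
        self.reconstruct = nn.Linear(hidden_dim, embed_dim)
        # MLP head that maps the same LSTM summary to a scalar node value.
        self.value_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, neighbor_ids):
        # neighbor_ids: (num_nodes, max_neighbors) padded tensor of neighbor indices.
        neigh_emb = self.embeddings(neighbor_ids)      # (N, K, D)
        _, (h_n, _) = self.lstm(neigh_emb)             # h_n: (1, N, H)
        summary = h_n[-1]                              # (N, H) neighborhood summary
        recon = self.reconstruct(summary)              # estimated node embedding
        value = self.value_head(summary).squeeze(-1)   # estimated scalar node value
        return recon, value


def joint_loss(recon, node_emb, value, target_value, flows, incidence,
               alpha=1.0, beta=1.0):
    # 1) Reconstruction error of the neighborhood LSTM for each node.
    recon_loss = ((recon - node_emb) ** 2).sum(dim=1).mean()
    # 2) Approximation error of the node value (target_value is a placeholder
    #    for however the node value is supervised; an assumption here).
    value_loss = ((value - target_value) ** 2).mean()
    # 3) Consistency between each node's value and its incident edge flows:
    #    incidence is a signed (num_nodes, num_edges) matrix, so
    #    incidence @ flows gives each node's net in-flow minus out-flow.
    consistency_loss = ((incidence @ flows - value) ** 2).mean()
    return recon_loss + alpha * value_loss + beta * consistency_loss
```

In this sketch the edge-flow vector `flows` would be a learnable tensor with the observed entries held fixed, so the consistency term ties the unknown flows to the learned node values, in the spirit of the regularization the abstract describes.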



Author information

Corresponding author

Correspondence to Haoran Mo.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Liang, G., Mo, H., Wang, Z., Dong, CQ., Wang, JY. (2020). Joint Deep Recurrent Network Embedding and Edge Flow Estimation. In: Huang, DS., Premaratne, P. (eds) Intelligent Computing Methodologies. ICIC 2020. Lecture Notes in Computer Science (LNAI), vol 12465. Springer, Cham. https://doi.org/10.1007/978-3-030-60796-8_40

  • DOI: https://doi.org/10.1007/978-3-030-60796-8_40

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-60795-1

  • Online ISBN: 978-3-030-60796-8

  • eBook Packages: Computer Science, Computer Science (R0)
