
A noise injection strategy for graph autoencoder training

  • Original Article
  • Published in Neural Computing and Applications

Abstract

A graph autoencoder maps graph data into a low-dimensional space. It is a powerful graph embedding method used in graph analytics to lower computational cost. Researchers have developed different graph autoencoders to address different needs. This paper proposes a strategy based on noise injection for graph autoencoder training. It is a general training strategy that can flexibly fit most existing training algorithms. The experimental results verify that this general strategy can significantly reduce overfitting and identify a noise-rate setting that yields consistent improvement in training performance.
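The sketch below illustrates how such a noise-injection step could wrap around an otherwise unchanged graph autoencoder training loop. It is a minimal sketch only: the corruption scheme (independent Gaussian perturbation of node-feature entries at a fixed noise rate) and the names inject_noise, noise_rate, and gae_reconstruction_loss are illustrative assumptions rather than the procedure reported in the paper.

    import numpy as np

    def inject_noise(features, noise_rate, rng=None):
        """Return a corrupted copy of a node-feature matrix.

        Hypothetical corruption scheme (an assumption, not the paper's):
        each entry is independently selected with probability noise_rate
        and perturbed with unit Gaussian noise.
        """
        rng = np.random.default_rng() if rng is None else rng
        mask = rng.random(features.shape) < noise_rate
        noisy = features.copy()
        noisy[mask] += rng.normal(size=int(mask.sum()))
        return noisy

    # Schematic use inside an existing training loop: corrupt the inputs
    # each epoch while still reconstructing the clean adjacency matrix.
    #
    # for epoch in range(num_epochs):
    #     x_noisy = inject_noise(x, noise_rate=0.1)
    #     loss = gae_reconstruction_loss(model(x_noisy, adj), adj)
    #     loss.backward(); optimizer.step(); optimizer.zero_grad()

Because only the input corruption changes, the same wrapper can sit in front of most existing graph autoencoder training algorithms, which is the sense in which the strategy is described as general.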


References

  1. Wang Y, Xu B, Kwak M, Zeng X (2020) A simple training strategy for graph autoencoder. In: Proceedings of the international conference on machine learning and computing (ICMLC), pp 341–345. https://doi.org/10.1145/3383972.3383985

  2. Tahmasebi H, Ravanmehr R, Mohamadrezaei R (2020) Social movie recommender system based on deep autoencoder network using Twitter data. Neural Comput Appl. https://doi.org/10.1007/s00521-020-05085-1


  3. Cai H, Zheng VW, Chang KCC (2018) A comprehensive survey of graph embedding: problems, techniques, and applications. IEEE Trans Knowl Data Eng 30:1616–1637. https://doi.org/10.1109/TKDE.2018.2807452


  4. Li B, Pi D (2020) Network representation learning: a systematic literature review. Neural Comput Appl. https://doi.org/10.1007/s00521-020-04908-5


  5. Pan S, Hu R, Fung SF et al (2020) Learning graph embedding with adversarial training methods. IEEE Trans Cybern 50:2475–2487. https://doi.org/10.1109/TCYB.2019.2932096


  6. Pan S, Hu R, Long G, et al (2018) Adversarially regularized graph autoencoder for graph embedding. In: Proceedings of the 27th international joint conference on artificial intelligence, pp 2609–2615

  7. Zhang D, Yin J, Zhu X, Zhang C (2018) Network representation learning: a survey. IEEE Trans Big Data. https://doi.org/10.1109/tbdata.2018.2850013


  8. Kipf TN, Welling M (2016) Variational graph auto-encoders. In: NIPS workshop on bayesian deep learning

  9. Wang D, Cui P, Zhu W (2016) Structural deep network embedding. In: Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining, pp 1225–1234

  10. Tu K, Cui P, Wang X, et al (2018) Deep recursive network embedding with regular equivalence. In: Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining, pp 2357–2366. https://doi.org/10.1145/3219819.3220068

  11. Samanta B, De A, Jana G, et al (2019) NeVAE: a deep generative model for molecular graphs. In: Proceedings of the AAAI conference on artificial intelligence, pp 1110–1117

  12. Grover A, Zweig A, Ermon S (2019) Graphite: iterative generative modeling of graphs. In: Proceedings of machine learning research, pp 2434–2444

  13. Cao S, Lu W, Xu Q (2016) Deep neural networks for learning graph representations. In: Proceedings of the 30th AAAI conference on artificial intelligence, pp 1145–1152

  14. Yu W, Zheng C, Cheng W, et al (2018) Learning deep network representations with adversarially regularized autoencoders. In: Proceedings of the international conference on knowledge discovery and data mining, pp 2663–2671

  15. Goodfellow I, Pouget-Abadie J, Mirza M et al (2014) Generative adversarial nets. Adv Neural Inf Process Syst 27:2672–2680


  16. Elman JL, Zipser D (1988) Learning the hidden structure of speech. J Acoust Soc Am 83:1615–1626. https://doi.org/10.1121/1.395916


  17. Sietsma J, Dow RJF (1991) Creating artificial neural networks that generalize. Neural Netw 4:67–79. https://doi.org/10.1016/0893-6080(91)90033-2


  18. Holmstrom L, Koistinen P (1992) Using additive noise in back propagation training. IEEE Trans Neural Netw 3:24–38. https://doi.org/10.1109/72.105415


  19. Skurichina M, Raudys Š, Duin RPW (2000) K-nearest neighbors directed noise injection in multilayer perceptron training. IEEE Trans Neural Netw 11:504–511. https://doi.org/10.1109/72.839019


  20. Brown WM, Gedeon TD, Groves DI (2003) Use of noise to augment training data: a neural network method of mineral-potential mapping in regions of limited known deposit examples. Nat Resour Res 12:141–152. https://doi.org/10.1023/A:1024218913435


  21. Matsuoka K (1992) Noise injection into inputs in back-propagation learning. IEEE Trans Syst Man Cybern 22:436–440. https://doi.org/10.1109/21.370200


  22. Reed R, Marks RJ, Oh S (1995) Similarities of error regularization, sigmoid gain scaling, target smoothing, and training with jitter. IEEE Trans Neural Netw 6:529–538. https://doi.org/10.1109/72.377960


  23. Bishop CM (1995) Training with noise is equivalent to Tikhonov regularization. Neural Comput 7:108–116. https://doi.org/10.1162/neco.1995.7.1.108


  24. Grandvalet Y, Canu S, Boucheron S (1997) Noise injection: theoretical prospects. Neural Comput 9:1093–1108. https://doi.org/10.1162/neco.1997.9.5.1093


  25. An G (1996) The Effects of adding noise during backpropagation training on a generalization performance. Neural Comput 8:643–674. https://doi.org/10.1162/neco.1996.8.3.643


  26. Piotrowski AP, Napiorkowski JJ (2013) A comparison of methods to avoid overfitting in neural networks training in the case of catchment runoff modelling. J Hydrol 476:97–111. https://doi.org/10.1016/j.jhydrol.2012.10.019


  27. Wright WA (1999) Bayesian approach to neural-network modeling with input uncertainty. IEEE Trans Neural Netw 10:1261–1270. https://doi.org/10.1109/72.809073


  28. Wright WA, Ramage G, Cornford D, Nabney IT (2000) Neural network modelling with input uncertainty: theory and application. J VLSI Signal Process Syst Signal Image Video Technol 26:169–188. https://doi.org/10.1023/A:1008111920791


  29. Zhang S, Tong H, Xu J, Maciejewski R (2019) Graph convolutional networks: a comprehensive review. Comput Soc Netw 6:1–23. https://doi.org/10.1186/s40649-019-0069-y


  30. McDowell LK, Gupta KM, Aha DW (2009) Cautious collective classification. J Mach Learn Res 10:2777–2836


  31. Giles CL, Bollacker KD, Lawrence S (1998) CiteSeer: an automatic citation indexing system. In: Proceedings of the ACM international conference on digital libraries, pp 89–98

  32. Kingma DP, Ba JL (2015) Adam: a method for stochastic optimization. In: Proceedings of the 3rd international conference on learning representations

  33. Fawcett T (2006) An introduction to ROC analysis. Pattern Recognit Lett 27:861–874. https://doi.org/10.1016/j.patrec.2005.10.010


  34. McClish DK (1989) Analyzing a portion of the ROC curve. Med Decis Mak 9:190–195. https://doi.org/10.1177/0272989X8900900307


  35. Wikipedia entry for the Receiver operating characteristic. https://en.wikipedia.org/wiki/Receiver_operating_characteristic. Accessed 9 Jan 2019


Acknowledgements

This work was partially supported by the National Science Foundation under Grant Number 1813252.

Author information


Corresponding author

Correspondence to Yingfeng Wang.

Ethics declarations

Conflict of interest

All authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A preliminary version of this work appeared in the proceedings of the International Conference on Machine Learning and Computing (ICMLC 2020) [1].


About this article


Cite this article

Wang, Y., Xu, B., Kwak, M. et al. A noise injection strategy for graph autoencoder training. Neural Comput Appl 33, 4807–4814 (2021). https://doi.org/10.1007/s00521-020-05283-x

