
Improving generative adversarial networks with simple latent distributions

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Generative Adversarial Networks (GANs) have drawn great attention recently as powerful models for generating high-quality images. Despite this success, GANs often suffer from unstable training, which can lead to poor generations. This drawback is argued to stem mainly from the difficulty of measuring the divergence between the highly complicated real and fake data distributions, which normally live in a high-dimensional space. To tackle this problem, previous work has sought a proper divergence capable of measuring the departure between such complex distributions. In contrast, we alleviate the problem from a different perspective: while retaining as much information as possible about the original high-dimensional distributions, we learn and leverage an additional latent space in which simple distributions are defined in low dimension; as a result, the distance between two simple distributions can be readily computed with an existing divergence measure. Concretely, to retain the data information, we maximize the mutual information between the variables of the high-dimensional complex distributions and those of the low-dimensional simple distributions. The departure between the resulting simple distributions is then measured in the original GAN fashion. To simplify the optimization further, we directly optimize a lower bound on the mutual information. Termed SimpleGAN, the proposed approach is evaluated on several baseline models, i.e., conventional GANs, DCGAN, WGAN-GP, WGAN-GP-res, and LSWGAN-GP, on the benchmark CIFAR-10 and STL-10 datasets, where it shows clear improvements over these baselines. Furthermore, compared with existing methods that measure the distribution departure directly in the high-dimensional space, our method clearly demonstrates its superiority. Finally, a series of experiments illustrates the advantages of the proposed SimpleGAN.
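To make the idea concrete, the sketch below shows one plausible realization in PyTorch: an encoder T maps images into a low-dimensional latent space, a critic measures the GAN divergence between the encoded (simple) real and fake distributions, and the mutual information I(x; T(x)) is kept high through the standard variational lower bound I(x; u) >= H(x) + E[log Q(x|u)], where Q is an auxiliary decoder and H(x) is constant with respect to the networks. All names, layer sizes, and the choice of a Gaussian decoder here are illustrative assumptions, not the authors' implementation; the full paper, rather than this abstract, defines the actual SimpleGAN objective.

    # A minimal, hypothetical sketch of the approach described in the abstract.
    # Assumptions: PyTorch, 3x32x32 images (CIFAR-10 scale), a 32-dimensional
    # latent code, and a unit-variance Gaussian decoder for the MI bound.
    import torch
    import torch.nn as nn

    latent_dim = 32  # size of the "simple" low-dimensional latent space (assumed)

    # Encoder T: maps an image x to a low-dimensional code u = T(x).
    encoder = nn.Sequential(
        nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),     # 32x32 -> 16x16
        nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),   # 16x16 -> 8x8
        nn.Flatten(),
        nn.Linear(128 * 8 * 8, latent_dim),
    )

    # Auxiliary decoder Q(x|u): its log-likelihood gives a tractable lower
    # bound on I(x; u), since I(x; u) >= H(x) + E[log Q(x|u)].
    decoder = nn.Sequential(
        nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
        nn.Unflatten(1, (128, 8, 8)),
        nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),  # 8x8 -> 16x16
        nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),    # 16x16 -> 32x32
    )

    # Critic defined on the low-dimensional codes, so the usual GAN/WGAN
    # divergence is measured between two *simple* distributions.
    critic = nn.Sequential(
        nn.Linear(latent_dim, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1),
    )

    def mi_lower_bound(x):
        """E[log Q(x | T(x))] up to constants: for a unit-variance Gaussian
        decoder this is a negative squared reconstruction error."""
        u = encoder(x)
        return -((x - decoder(u)) ** 2).mean()

    # Toy usage with random tensors standing in for real and generated images.
    x_real = torch.randn(8, 3, 32, 32)
    x_fake = torch.randn(8, 3, 32, 32)  # would come from a generator G(z)
    s_real, s_fake = critic(encoder(x_real)), critic(encoder(x_fake))
    # WGAN-style critic loss on the codes, plus the MI term for the encoder
    # (a gradient penalty, as in WGAN-GP, would be added in practice):
    loss = s_fake.mean() - s_real.mean() - mi_lower_bound(x_real)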



Acknowledgements

The work was partially supported by the following: National Natural Science Foundation of China under No. 61876155; Jiangsu Science and Technology Programme (Natural Science Foundation of Jiangsu Province) under Nos. BE2020006-4, BK20181189; Key Program Special Fund in XJTLU under No. KSF-T-06.

Author information


Corresponding author

Correspondence to Kaizhu Huang.

Ethics declarations

Conflict of interest

We declare that we have no personal or financial relationships with other people or organizations that could inappropriately influence this work, and no professional or other personal interest of any nature or kind in any products, services, or companies that could be construed as influencing the position presented in, or the review of, the manuscript entitled “Improving Generative Adversarial Networks with Simple Latent Distributions.”

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, S., Huang, K., Qian, Z. et al. Improving generative adversarial networks with simple latent distributions. Neural Comput & Applic 33, 13193–13203 (2021). https://doi.org/10.1007/s00521-021-05946-3


