
A Low Spectral Bias Generative Adversarial Model for Image Generation

  • Conference paper

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1628))

Abstract

We present a systematic analysis of spectral bias in the frequency domain, an aspect that has been largely neglected. Traditional generative adversarial networks (GANs) attempt to capture image details by designing specific network architectures or losses, focusing on generating visually convincing images. The convolution theorem shows that image processing in the frequency domain is parallelizable and can be both faster and more effective than processing in the spatial domain. However, little work has examined the bias between the frequency features of generated images and those of real ones. In this paper, we first empirically demonstrate that this distribution bias is general across datasets and across GANs with different sampling methods. We then explain the causes of the spectral bias through a derivation that reconsiders the sampling process of the GAN generator. Building on these analyses, we propose a low-spectral-bias hybrid generative model that reduces the spectral bias and improves the quality of the generated images.
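As an illustration of how such a frequency-domain bias is commonly measured (a sketch of a standard diagnostic from the spectral-bias literature, not the authors' exact protocol), one can compare the azimuthally averaged power spectra of real and generated images: a systematic gap in the high-frequency bins indicates spectral bias. The image arrays and bin count below are placeholders.

```python
import numpy as np

def radial_power_spectrum(img: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Azimuthally averaged power spectrum of a grayscale image."""
    # 2-D FFT, centered so that low frequencies sit in the middle.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)  # radial frequency of each coefficient
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    # Average the power within each radial bin.
    power = np.bincount(idx, weights=spec.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return power / np.maximum(counts, 1)

# Random stand-ins for a real image and a generated one; in practice these
# would be batches from the dataset and from the GAN under study.
rng = np.random.default_rng(0)
real_profile = radial_power_spectrum(rng.standard_normal((64, 64)))
fake_profile = radial_power_spectrum(rng.standard_normal((64, 64)))
print(real_profile.shape)  # (32,)
```

Plotting both profiles on a log scale against the bin index makes the divergence at high frequencies visible; this is the kind of empirical evidence the abstract refers to.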

This work is supported in part by the National Key Research and Development Program of China under Grant no. 2020YFB1806403.



Author information

Correspondence to Lei Xu.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Xu, L., Liu, Z., Liu, P., Cai, L. (2022). A Low Spectral Bias Generative Adversarial Model for Image Generation. In: Wang, Y., Zhu, G., Han, Q., Wang, H., Song, X., Lu, Z. (eds) Data Science. ICPCSEE 2022. Communications in Computer and Information Science, vol 1628. Springer, Singapore. https://doi.org/10.1007/978-981-19-5194-7_26


  • DOI: https://doi.org/10.1007/978-981-19-5194-7_26

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-5193-0

  • Online ISBN: 978-981-19-5194-7

  • eBook Packages: Computer Science (R0)
