DOI: 10.1145/3483845.3483874 · CCRIS Conference Proceedings · research-article

Effect of regularity on learning in GANs

Published: 22 October 2021

ABSTRACT

Generative Adversarial Networks (GANs) are algorithmic architectures that use two neural networks, pitting one against the other (hence "adversarial") in order to generate new, synthetic instances of data that can pass for real data. GANs have been highly successful on datasets such as MNIST, SVHN, and CelebA, but training a GAN on a large-scale dataset such as ImageNet remains challenging because such datasets are considered less regular. In this paper, we perform empirical experiments on parameterized synthetic datasets to probe how the regularity of a dataset affects learning in GANs. We show empirically that regular datasets are easier for GANs to model because they yield a more stable training process.
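The adversarial setup described in the abstract can be sketched in a few lines. The following is a minimal toy illustration, not the paper's implementation: it assumes a 1-D Gaussian as a maximally "regular" dataset, an affine generator, and a logistic discriminator, trained by alternating hand-derived gradient steps (the non-saturating generator loss from Goodfellow et al.). All names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real" data: a highly regular 1-D Gaussian.
real = rng.normal(loc=4.0, scale=1.25, size=256)

# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.01

for _ in range(2000):
    z = rng.normal(size=256)
    fake = a * z + b

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1.0 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1.0 - d_real) - np.mean(d_fake))

    # Generator step: gradient ascent on log D(fake) (non-saturating loss).
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1.0 - d_fake) * w * z)
    b += lr * np.mean((1.0 - d_fake) * w)

# All parameters stayed finite, i.e. the alternating game did not diverge.
print(bool(np.isfinite([a, b, w, c]).all()))
```

On a regular unimodal target like this, the alternating updates stay well behaved; the instabilities the paper studies arise as the target distribution becomes less regular.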


Published in

CCRIS '21: Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System
August 2021, 278 pages
ISBN: 9781450390453
DOI: 10.1145/3483845

Copyright © 2021 ACM

Publisher

Association for Computing Machinery, New York, NY, United States


Qualifiers

• Research article (refereed limited)