
Study of GANs Using a Few Images for Sealer Inspection Systems

  • Conference paper
Frontiers of Computer Vision (IW-FCV 2020)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1212)


Abstract

This paper presents a comparative study of Generative Adversarial Networks (GANs), evaluating the quality of images generated from only a few training samples. In deep learning-based systems, both the amount and the quality of data are important. However, at industrial sites, data acquisition is often difficult or restricted for reasons such as security and industrial specificity. It is therefore necessary to augment small-scale data into large-scale data for model training. GANs are among the representative deep learning-based image generation models. Three GANs, namely DCGAN, BEGAN, and SinGAN, are used to compare the quality of the generated image samples, and the comparison is carried out using scores from different measurement methods.
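
As background, GAN image quality is commonly scored with metrics such as the Fréchet Inception Distance (FID), which compares feature statistics of real and generated images. The sketch below is a minimal illustration, not the paper's implementation; it assumes Inception-v3 feature vectors have already been extracted, and the names `real_feats` and `fake_feats` are hypothetical.

```python
# Minimal FID sketch (illustrative; not the paper's code).
# Inputs are assumed to be Inception-v3 feature arrays of shape (N, D).
import numpy as np
from scipy import linalg

def fid_score(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 * sqrtm(C_r @ C_f))."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)  # matrix square root of the covariance product
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical error
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

A lower FID indicates generated samples whose feature statistics lie closer to the real data, so a score of this kind can rank the outputs of DCGAN, BEGAN, and SinGAN under the same few-sample setting.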



Acknowledgement

This work was supported by the Technology development Program (S2760246) funded by the Ministry of SMEs and Startups (MSS, Korea).

Author information


Corresponding author

Correspondence to Hyun-Deok Kang.



Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Seo, D., Ha, Y., Ha, S., Jo, K.H., Kang, H.D. (2020). Study of GANs Using a Few Images for Sealer Inspection Systems. In: Ohyama, W., Jung, S. (eds) Frontiers of Computer Vision. IW-FCV 2020. Communications in Computer and Information Science, vol 1212. Springer, Singapore. https://doi.org/10.1007/978-981-15-4818-5_17

  • DOI: https://doi.org/10.1007/978-981-15-4818-5_17

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-4817-8

  • Online ISBN: 978-981-15-4818-5

  • eBook Packages: Computer Science, Computer Science (R0)
