Abstract
This paper describes a comparative study of Generative Adversarial Networks (GANs), evaluated through the quality of images generated from only a few training samples. In deep learning-based systems, the amount and quality of data are critical. In industrial settings, however, data acquisition is often difficult or limited for reasons such as security and industry-specific constraints. It is therefore necessary to expand small-scale data into large-scale data for model training. GANs are among the representative deep learning-based image generation models. Three GANs, DCGAN, BEGAN, and SinGAN, are used to compare the quality of the generated image samples. The comparison is carried out based on scores from different measurement methods.
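The score-based comparison refers to measures such as the Fréchet Inception Distance (Heusel et al., see references). As an illustration only, here is a minimal sketch of the Fréchet distance between two sets of feature vectors; the feature extractor (e.g. Inception-v3) is omitted, and the function name and random toy features are assumptions for this example, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feat_a, feat_b):
    """Frechet distance between two feature sets (rows = samples)."""
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    # sqrtm can return tiny imaginary components from numerical error
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(np.sum((mu_a - mu_b) ** 2)
                 + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 8))   # stand-in "real" features
fake = rng.normal(0.5, 1.0, size=(500, 8))   # stand-in "generated" features
print(frechet_distance(real, real))  # identical sets -> near 0
print(frechet_distance(real, fake))  # shifted distribution -> positive
```

A lower score indicates that the generated distribution is statistically closer to the real one, which is why such distances are commonly used to rank GAN outputs.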
References
Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein GAN (2017). arXiv:1701.07875
Berthelot, D., Schumm, T., Metz, L.: BEGAN: Boundary Equilibrium Generative Adversarial Networks (2017). arXiv:1703.10717
Beyerer, J., Puente León, F., Frese, C.: Machine Vision: Automated Visual Inspection: Theory, Practice and Applications. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-47794-6
Brock, A., Donahue, J., Simonyan, K.: Large Scale GAN Training for High Fidelity Natural Image Synthesis (2018). arXiv:1809.11096
Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255, June 2009
Everingham, M., Eslami, S.M.A., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The pascal visual object classes challenge: a retrospective. Int. J. Comput. Vision (IJCV) 111(1), 98–136 (2015)
Gauen, K., et al.: Comparison of visual datasets for machine learning. In: IEEE International Conference on Information Reuse and Integration (IRI), pp. 346–355, August 2017
Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge (2016)
Goodfellow, I., et al.: Generative adversarial nets. In: Advances in Neural Information Processing Systems (NIPS), pp. 2672–2680 (2014)
Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.C.: Improved training of Wasserstein GANs. In: Advances in Neural Information Processing Systems (NIPS), pp. 5767–5777 (2017)
Gurumurthy, S., Sarvadevabhatla, R.K., Babu, R.V.: DeLiGAN: generative adversarial networks for diverse and limited data. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4941–4949, July 2017
He, K., Sun, J.: Statistics of patch offsets for image completion. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012, Part II. LNCS, pp. 16–29. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33709-3_2
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In: Advances in Neural Information Processing Systems (NIPS), pp. 6626–6637 (2017)
Isola, P., Zhu, J., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967–5976, July 2017
Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4396–4405, June 2019
Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation (2017). arXiv:1710.10196
Karras, T., Laine, S., Aila, T.: A Style-Based Generator Architecture for Generative Adversarial Networks (2018). arXiv:1812.04948
Koppal, S.J.: Lambertian Reflectance, pp. 441–443. Springer, Boston (2014). https://doi.org/10.1007/978-0-387-31439-6
Lempitsky, V., Vedaldi, A., Ulyanov, D.: Deep image prior. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9446–9454, June 2018
Lin, T.Y., et al.: Microsoft COCO: Common Objects in Context (2014). arXiv:1405.0312
Mao, X., Li, Q., Xie, H., Lau, R.Y.K., Wang, Z., Smolley, S.P.: Least squares generative adversarial networks. In: IEEE International Conference on Computer Vision (ICCV), pp. 2813–2821, October 2017
Mechrez, R., Shechtman, E., Zelnik-Manor, L.: Saliency driven image manipulation. In: IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1368–1376, March 2018
Metz, L., Poole, B., Pfau, D., Sohl-Dickstein, J.: Unrolled Generative Adversarial Networks (2016). arXiv:1611.02163
Miyato, T., Kataoka, T., Koyama, M., Yoshida, Y.: Spectral Normalization for Generative Adversarial Networks (2018). arXiv:1802.05957
Osokin, A., Chessel, A., Salas, R.E.C., Vaggi, F.: GANs for Biological Image Synthesis (2017). arXiv:1708.04692
Radford, A., Metz, L., Chintala, S.: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (2015). arXiv:1511.06434
Roh, Y., Heo, G., Whang, S.E.: A Survey on Data Collection for Machine Learning: A Big Data–AI Integration Perspective (2018). arXiv:1811.03402
Salimans, T., et al.: Improved techniques for training GANs. In: Lee, D.D., Sugiyama, M., Luxburg, U.V., Guyon, I., Garnett, R. (eds.) Advances in Neural Information Processing Systems 29, pp. 2234–2242 (2016)
Shaham, T.R., Dekel, T., Michaeli, T.: SinGAN: learning a generative model from a single natural image. In: IEEE International Conference on Computer Vision (ICCV), pp. 4570–4580 (2019)
Szegedy, C., et al.: Going deeper with convolutions. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9, June 2015
Cho, T.S., Butman, M., Avidan, S., Freeman, W.T.: The patch transform and its applications to image editing. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–8, June 2008
Turhan, C.G., Bilge, H.S.: Recent trends in deep generative models: a review. In: International Conference on Computer Science and Engineering (UBMK), pp. 574–579, September 2018
Wang, Y., Wu, C., Herranz, L., van de Weijer, J., Gonzalez-Garcia, A., Raducanu, B.: Transferring GANs: generating images from limited data. In: European Conference on Computer Vision (ECCV), pp. 220–236 (2018)
Zhang, H., et al.: StackGAN: text to photo-realistic image synthesis with stacked generative adversarial networks. In: IEEE International Conference on Computer Vision (ICCV), pp. 5908–5916, October 2017
Zhang, H., et al.: StackGAN++: realistic image synthesis with stacked generative adversarial networks. IEEE Trans. Pattern Anal. Mach. Intell. 41(8), 1947–1962 (2019)
Zhang, H., Goodfellow, I., Metaxas, D., Odena, A.: Self-Attention Generative Adversarial Networks (2018). arXiv:1805.08318
Zhao, J., Mathieu, M., LeCun, Y.: Energy-based Generative Adversarial Network (2016). arXiv:1609.03126
Zhu, J., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: IEEE International Conference on Computer Vision (ICCV), pp. 2242–2251, October 2017
Acknowledgement
This work was supported by the Technology development Program (S2760246) funded by the Ministry of SMEs and Startups (MSS, Korea).
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
Cite this paper
Seo, D., Ha, Y., Ha, S., Jo, KH., Kang, HD. (2020). Study of GANs Using a Few Images for Sealer Inspection Systems. In: Ohyama, W., Jung, S. (eds) Frontiers of Computer Vision. IW-FCV 2020. Communications in Computer and Information Science, vol 1212. Springer, Singapore. https://doi.org/10.1007/978-981-15-4818-5_17
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-4817-8
Online ISBN: 978-981-15-4818-5