Abstract
Recently, some generative adversarial network (GAN)-based super-resolution (SR) methods have progressed to the point where they can produce photo-realistic natural images using a generator (G) and discriminator (D) adversarial scheme. However, vanilla GAN-based SR methods cannot achieve good reconstruction fidelity and perceptual fidelity on real-world facial images at the same time. Because of the discriminator loss, they are hard to train stably, which may cause mode collapse. In this paper, we present an Enhanced Discriminative Generative Adversarial Network (EDGAN) for face super-resolution that achieves better reconstruction and perceptual fidelity. First, we find that a versatile D drives the adversarial framework toward a preferable Nash equilibrium. We therefore design the D with dense connections, which yields a more stable adversarial loss. Furthermore, a novel perceptual loss function that reuses the intermediate features of D is used to alleviate the vanishing-gradient problem of the G. To our knowledge, this is the first framework that focuses on improving the performance of the D. Quantitatively, experimental results on two widely used facial image databases show the advantages of EDGAN over state-of-the-art methods under different metrics. EDGAN produces sharper and more realistic results than its competitors on real-world facial images with large pose and illumination variations.
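The abstract describes a perceptual loss built by reusing the discriminator's intermediate features. A minimal sketch of one plausible form of such a loss, layer-wise mean-squared feature matching between the D activations of the super-resolved and ground-truth images, is shown below. The function name, the plain-list representation of flattened feature maps, and the averaging scheme are illustrative assumptions, not details taken from the paper.

```python
def feature_matching_loss(feats_sr, feats_hr):
    """Hypothetical layer-wise feature-matching loss.

    feats_sr / feats_hr: lists of flattened discriminator feature maps
    (one list of floats per intermediate layer) for the super-resolved
    image and the ground-truth high-resolution image, respectively.
    Returns the MSE between corresponding feature maps, averaged over layers.
    """
    assert len(feats_sr) == len(feats_hr), "need features from the same D layers"
    total = 0.0
    for f_sr, f_hr in zip(feats_sr, feats_hr):
        # Mean-squared error over one layer's (flattened) feature map.
        total += sum((a - b) ** 2 for a, b in zip(f_sr, f_hr)) / len(f_sr)
    return total / len(feats_sr)
```

In a full training loop this term would typically be weighted and added to the generator's adversarial and pixel losses; because the features come from D rather than a fixed pretrained network, the matching targets evolve as D improves.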
T. Lu—This work is supported by the National Natural Science Foundation of China (61502354, 61501413, 61671332, 61771353, 41501505), the Natural Science Foundation of Hubei Province of China (2012FFA099, 2012FFA134, 2013CF125, 2014CFA130, 2015CFB451), and the Scientific Research Foundation of Wuhan Institute of Technology (K201713).
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Yang, X. et al. (2018). Enhanced Discriminative Generative Adversarial Network for Face Super-Resolution. In: Hong, R., Cheng, WH., Yamasaki, T., Wang, M., Ngo, CW. (eds) Advances in Multimedia Information Processing – PCM 2018. PCM 2018. Lecture Notes in Computer Science(), vol 11165. Springer, Cham. https://doi.org/10.1007/978-3-030-00767-6_41
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-00766-9
Online ISBN: 978-3-030-00767-6
eBook Packages: Computer Science (R0)