
SE-DCGAN: a New Method of Semantic Image Restoration

Published in Cognitive Computation.

Abstract

Image restoration infers the content of a damaged region from the information that remains in the image, particularly along the edges of the corrupted area, and then fills the region to restore the image. To address image occlusion in practical applications, we propose a squeeze-and-excitation deep convolutional generative adversarial network (SE-DCGAN). First, SE-DCGAN generates many new sharp images. Then, the generated image most similar to the original, measured by the context semantics of the original image and the encoding of its unmasked portion, is selected to fill in the damaged region. SE-DCGAN introduces the maxout activation, whose strong fitting capability improves image-generation efficiency and avoids redundancy in the generated images. Experiments on three datasets (CelebA, Street View House Numbers, and anime avatars) show that our method successfully predicts large missing regions. The method improves the recognition rate of occluded images, produces perceptually high-quality results, and is flexible enough to handle a variety of masks and obstructions.
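The fill step described above — selecting, among generated candidates, the one whose known-region pixels best match the corrupted image, then pasting its pixels into the hole — can be sketched in NumPy. This is a minimal illustration under our own assumptions, not the paper's implementation: in SE-DCGAN the candidates come from the trained generator and similarity combines contextual semantics with the encoding of the unmasked portion, while here we use a plain masked L1 distance; `maxout` below is the generic form of the activation the paper adopts, and all function names are ours.

```python
import numpy as np

def maxout(x, w, b):
    # Generic maxout activation: the output is the elementwise maximum
    # over k affine pieces. x: (n, d_in), w: (k, d_in, d_out), b: (k, d_out).
    pieces = np.einsum('ij,kjl->kil', x, w) + b[:, None, :]
    return pieces.max(axis=0)

def masked_l1(candidate, original, mask):
    # Contextual distance: compare images only on the known (mask == 1) pixels.
    return np.abs(mask * (candidate - original)).sum()

def semantic_fill(candidates, original, mask):
    # Pick the generated candidate closest to the corrupted image on its
    # known region, then paste that candidate's pixels into the hole
    # (mask == 0) while keeping the original's known pixels.
    best = min(candidates, key=lambda c: masked_l1(c, original, mask))
    return mask * original + (1 - mask) * best

# Toy example: a 2x2 "image" with one missing pixel (bottom-right).
original = np.ones((2, 2))
mask = np.array([[1.0, 1.0], [1.0, 0.0]])
candidates = [np.full((2, 2), 5.0),                  # poor match on known pixels
              np.array([[1.0, 1.0], [1.0, 9.0]])]    # exact match on known pixels
restored = semantic_fill(candidates, original, mask)
```

In the full method the selected candidate is produced by optimizing the generator's latent input, so the pasted region is also semantically consistent with the surrounding context rather than merely the closest of a fixed candidate set.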



Funding

This work was supported by the Fundamental Research Funds of Central Universities [No. 2019XKQYMS87].

Author Information

Corresponding Author

Correspondence to Xinzheng Xu.

Ethics declarations

Informed Consent

Informed consent was not required because no human or animals were involved.

Human and Animal Rights

This article does not contain any studies with human or animal subjects performed by any of the authors.

Conflict of Interest

The authors declare no competing interests.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, F., Wang, X., Sun, T. et al. SE-DCGAN: a New Method of Semantic Image Restoration. Cogn Comput 13, 981–991 (2021). https://doi.org/10.1007/s12559-021-09877-y
