Facial Occlusion Detection and Reconstruction Using GAN

  • Conference paper
  • First Online:
Computer Vision and Image Processing (CVIP 2020)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1377)

Abstract

We developed an architecture to address the presence of occluding objects such as glasses, scarves, and masks on the face. Such occluders significantly degrade recognition accuracy; they can be treated simply as noise or unwanted objects in an image, and they markedly alter its appearance. In this paper we deal with the removal of specific types of occlusion, such as glasses. A synthetic mask for the occluded region is created and facial landmarks are generated, which then guide an image completion method that fills in the face after the occluder is removed. We trained the model on the publicly available CelebA dataset. The experimental results show that the proposed architecture effectively removes occlusions from faces covered with glasses or scarves.
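
The abstract describes a pipeline of occlusion masking, landmark generation, and GAN-based completion. The following minimal PyTorch sketch wires those stages together as a rough illustration only: the module names (LandmarkPredictor, Completer), layer sizes, 128x128 input resolution, and heatmap conditioning are assumptions made for this sketch and do not reproduce the authors' actual architecture; the adversarial discriminator and training losses are omitted.

# Hypothetical sketch of a landmark-guided completion pipeline (assumes PyTorch).
# Steps follow the abstract: synthetic occlusion mask -> facial landmarks -> completion.
import torch
import torch.nn as nn


class LandmarkPredictor(nn.Module):
    """Predicts coarse landmark heatmaps from the masked face (assumed design)."""
    def __init__(self, n_landmarks=68):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),   # RGB + mask channel
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, n_landmarks, 4, stride=2, padding=1),
        )

    def forward(self, masked_img, mask):
        return torch.sigmoid(self.net(torch.cat([masked_img, mask], dim=1)))


class Completer(nn.Module):
    """Encoder-decoder generator conditioned on the masked image, the mask,
    and the landmark heatmaps (illustrative, not the paper's exact network)."""
    def __init__(self, n_landmarks=68):
        super().__init__()
        in_ch = 3 + 1 + n_landmarks
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # [0, 1] output
        )

    def forward(self, masked_img, mask, heatmaps):
        out = self.net(torch.cat([masked_img, mask, heatmaps], dim=1))
        # Keep known pixels unchanged; synthesize only the occluded region.
        return masked_img * (1 - mask) + out * mask


# One forward pass on dummy data standing in for a 128x128 face crop.
img = torch.rand(1, 3, 128, 128)
mask = (torch.rand(1, 1, 128, 128) > 0.9).float()   # stand-in for a synthetic occlusion mask
masked = img * (1 - mask)
heatmaps = LandmarkPredictor()(masked, mask)
completed = Completer()(masked, mask, heatmaps)
print(completed.shape)  # torch.Size([1, 3, 128, 128])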

Author information

Correspondence to Diksha Khas or Sumit Kumar.

Copyright information

© 2021 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Khas, D., Kumar, S., Singh, S.K. (2021). Facial Occlusion Detection and Reconstruction Using GAN. In: Singh, S.K., Roy, P., Raman, B., Nagabhushan, P. (eds) Computer Vision and Image Processing. CVIP 2020. Communications in Computer and Information Science, vol 1377. Springer, Singapore. https://doi.org/10.1007/978-981-16-1092-9_22

  • DOI: https://doi.org/10.1007/978-981-16-1092-9_22

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-1091-2

  • Online ISBN: 978-981-16-1092-9

  • eBook Packages: Computer Science, Computer Science (R0)
