
Privacy preservation through facial de-identification with simultaneous emotion preservation

  • Original Paper
  • Published in: Signal, Image and Video Processing (2021)

Abstract

Due to the availability of low-cost internet access and other data transmission media, a high volume of multimedia data is shared very quickly. Often, the identities of individuals are revealed through images or videos without their consent, compromising their privacy. Since the face is the biometric feature that reveals the most identifiable characteristics of a person in an image or video frame, the need for an effective face de-identification algorithm for privacy preservation cannot be over-emphasized. Existing face de-identification methods are either non-formal or unable to obfuscate identifiable features completely. In this paper, we propose an automated face de-identification algorithm that takes a facial image as input and generates a new face that preserves the emotion and non-biometric facial attributes of the target face. We build a proxy set, a large collection of artificial faces generated by StyleGAN, and select from it the face whose facial expression and pose are most similar to those of the target face. Because the faces in the proxy set are artificially generated, the selected face is completely anonymous. To retain the non-biometric attributes of the target face, we employ a generative adversarial network (GAN) with a suitable loss function that fuses these attributes with the face selected from the proxy set to produce the final de-identified face. Experimental results demonstrate the superiority of our approach over state-of-the-art face de-identification methods.
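
The abstract outlines a two-stage pipeline: first, select from a pre-generated StyleGAN proxy set the face whose expression and pose best match the target; second, fuse the target's non-biometric attributes onto that proxy face with a GAN. The following is a minimal sketch of the selection step only, assuming each face's expression and pose have already been encoded as a fixed-length descriptor; the descriptor dimensions, Euclidean distance metric, and function names are illustrative assumptions rather than the authors' implementation.

    import numpy as np

    def select_proxy_face(target_descriptor, proxy_descriptors):
        # Return the index of the proxy face whose expression/pose
        # descriptor is closest (Euclidean distance) to the target's.
        dists = np.linalg.norm(proxy_descriptors - target_descriptor, axis=1)
        return int(np.argmin(dists))

    # Illustrative usage: in practice each descriptor might concatenate the
    # outputs of an emotion classifier and a head-pose regressor; random
    # vectors stand in for them here.
    rng = np.random.default_rng(0)
    proxy_descriptors = rng.normal(size=(10000, 10))  # descriptors of StyleGAN proxy faces
    target_descriptor = rng.normal(size=(10,))        # descriptor of the face to de-identify
    idx = select_proxy_face(target_descriptor, proxy_descriptors)
    print("selected proxy face index:", idx)

The selected proxy face would then be passed, together with the target's non-biometric attributes, to the GAN-based fusion stage described in the abstract.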



Acknowledgements

The authors thank NVIDIA for supporting this research with a Titan Xp GPU, and everyone who participated in the survey used to compute the opinion scores.

Author information

Corresponding author

Correspondence to Pratik Chattopadhyay.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Agarwal, A., Chattopadhyay, P. & Wang, L. Privacy preservation through facial de-identification with simultaneous emotion preservation. SIViP 15, 951–958 (2021). https://doi.org/10.1007/s11760-020-01819-9

