Abstract
With the wide use of big media data and social networks, and the free release of personal photos online, face privacy protection has become an increasing concern. Although pioneering works have made some progress, they are not sufficient for sanitizing sensitive identity information. In this paper, we propose a generative approach that de-identifies face images while preserving non-sensitive information for data reusability. To ensure a high privacy level, we introduce a large-margin model that keeps each synthesized new identity at a safe distance from both the input identity and existing identities. In addition, we show that our face de-identification operation satisfies \(\epsilon \)-differential privacy, which provides a rigorous privacy notion in theory. We evaluate the proposed approach on the VGGFace dataset and compare it with several state-of-the-art methods. The results show that our approach outperforms previous solutions in effective face privacy protection while preserving the major utilities.
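To make the two guarantees above concrete, the following is a minimal sketch rather than the authors' exact formulation: the embedding function \(f\), distance \(d\), margin \(m\), de-identified output \(\hat{x}\), and mechanism \(\mathcal{M}\) are illustrative symbols, not taken from the paper. The large-margin requirement keeps the synthesized identity at least a margin \(m\) away from the input identity \(x\) and from every existing identity \(x_1, \dots, x_n\), while \(\epsilon \)-differential privacy (in its standard form) bounds how much any single input can influence the output distribution:

\[
d\bigl(f(\hat{x}), f(x)\bigr) \ge m, \qquad d\bigl(f(\hat{x}), f(x_i)\bigr) \ge m \quad \text{for } i = 1, \dots, n,
\]
\[
\Pr[\mathcal{M}(x) \in S] \le e^{\epsilon}\,\Pr[\mathcal{M}(x') \in S] \quad \text{for all neighboring inputs } x, x' \text{ and output sets } S.
\]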
References
Agarwal, S., et al.: Protecting world leaders against deep fakes. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 38–45 (2019)
Ayala-Rivera, V., et al.: A systematic comparison and evaluation of k-anonymization algorithms for practitioners. Trans. Data Privacy 7(3), 337–370 (2014)
Bao, J., Chen, D., Wen, F., Li, H., Hua, G.: Towards open-set identity preserving face synthesis. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6713–6722 (2018)
Bhattarai, B., Mignon, A., Jurie, F., Furon, T.: Puzzling face verification algorithms for privacy protection. In: IEEE International Workshop on Information Forensics and Security (WIFS), pp. 66–71 (2014)
Bitouk, D., et al.: Face swapping: automatically replacing faces in photographs. ACM Transactions on Graphics (TOG), vol. 27, p. 39 (2008)
Brkic, K., et al.: I know that person: generative full body and face de-identification of people in images. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1319–1328 (2017)
Cao, Q., et al.: VGGFace2: a dataset for recognising faces across pose and age. In: IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pp. 67–74 (2018)
Chatzikokolakis, K., et al.: Broadening the scope of differential privacy using metrics. In: Privacy Enhancing Technologies, pp. 82–102 (2013)
Du, L., Yi, M., Blasch, E., Ling, H.: GARP-face: balancing privacy protection and utility preservation in face de-identification. In: IEEE International Joint Conference on Biometrics, pp. 1–8 (2014)
Fan, L.: Image pixelization with differential privacy. In: IFIP Annual Conference on Data and Applications Security and Privacy (2018)
Fan, L.: Image obfuscation with quantifiable privacy. In: CV-COPS (2019)
Korshunov, P., Ebrahimi, T.: Using face morphing to protect privacy. In: IEEE International Conference on Advanced Video and Signal Based Surveillance, pp. 208–213 (2013)
Kuang, Z., Li, Z., Lin, D., Fan, J.: Automatic privacy prediction to accelerate social image sharing. In: IEEE Third International Conference on Multimedia Big Data (BigMM), pp. 197–200 (2017)
Ledig, C., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4681–4690 (2017)
Li, T., Lin, L.: AnonymousNet: natural face de-identification with measurable privacy. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2019)
Majumdar, P., et al.: Evading face recognition via partial tampering of faces. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2019)
McPherson, R., et al.: Defeating image obfuscation with deep learning. arXiv preprint arXiv:1609.00408 (2016)
Meden, B., et al.: k-same-net: k-anonymity with generative deep neural networks for face deidentification. Entropy 20(1), 60 (2018)
Mirza, M., Osindero, S.: Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784 (2014)
Newton, E.M., Sweeney, L., Malin, B.: Preserving privacy by de-identifying face images. IEEE Trans. Knowl. Data Eng. (TKDE) 17(2), 232–243 (2005)
Oh, S.J., et al.: Faceless person recognition: privacy implications in social media. In: European Conference on Computer Vision (ECCV), pp. 19–35 (2016)
van den Oord, A., et al.: Parallel WaveNet: fast high-fidelity speech synthesis. In: Proceedings of Machine Learning Research, pp. 3918–3926 (2018)
Radford, A., et al.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015)
Ren, Z., Jae Lee, Y., Ryoo, M.S.: Learning to anonymize faces for privacy preserving action detection. In: European Conference on Computer Vision (ECCV), pp. 620–636 (2018)
Ribaric, S., Ariyaeeinia, A., Pavesic, N.: De-identification for privacy protection in multimedia content: a survey. Signal Process. Image Commun. 47, 131–151 (2016)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
Sun, Z., et al.: Distinguishable de-identified faces. In: IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), vol. 4, pp. 1–6 (2015)
Sweeney, L.: k-anonymity: a model for protecting privacy. Int. J. Uncertainty, Fuzziness Knowl.-Based Syst. 10(5), 557–570 (2002)
Wu, Y., Yang, F., Ling, H.: Privacy-protective-GAN for face de-identification. J. Comput. Sci. Technol. (JCST) 31(1), 47–60 (2019)
Yang, X., Li, Y., Lyu, S.: Exposing deep fakes using inconsistent head poses. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8261–8265 (2019)
Yuan, L., et al.: Privacy-preserving photo sharing based on a secure JPEG. In: IEEE Conference on Computer Communications Workshops, pp. 185–190 (2015)
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grant Nos. 61806063, 61772161, and 61622205. The authors would like to thank the reviewers for their insightful comments and valuable suggestions.
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Guo, Z., Liu, H., Kuang, Z., Nakashima, Y., Babaguchi, N. (2020). Privacy Sensitive Large-Margin Model for Face De-Identification. In: Zhang, H., Zhang, Z., Wu, Z., Hao, T. (eds) Neural Computing for Advanced Applications. NCAA 2020. Communications in Computer and Information Science, vol 1265. Springer, Singapore. https://doi.org/10.1007/978-981-15-7670-6_40
DOI: https://doi.org/10.1007/978-981-15-7670-6_40
Published:
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-7669-0
Online ISBN: 978-981-15-7670-6
eBook Packages: Computer Science, Computer Science (R0)