A novel privacy protection approach with better human imperceptibility

Published in: Applied Intelligence

Abstract

People today readily share personal information such as photos and videos on social networking websites, e.g., Facebook, Snapchat, and Instagram. This makes it easier for others to breach their privacy and harm them, directly or indirectly. Advances in Machine Learning (ML) and Artificial Intelligence (AI) have produced systems that can extract sensitive information, such as facial attributes and text, from images and videos, and these capabilities can be exploited for privacy breaches. In this paper, we propose a novel privacy protection method that adds intelligent noise to an image while preserving its aesthetics and attributes. We determine multiple attributes of an image, such as baldness, smiling, and gender, and intelligently add noise only to the regions that define a particular attribute, localized using the visual explanation technique GradCam++, thereby preserving the other attributes. The noise is based on the Fast Gradient Sign Method (FGSM), which perturbs an input image along the sign of the gradient of the loss to create an adversarial image. By combining the FGSM adversarial perturbation with the GradCam++ output, we affect only the targeted attribute while keeping the change imperceptible to humans. Experimental results on the CelebA dataset show that our attack outperforms existing attacks, including naive FGSM, Projected Gradient Descent (PGD), Momentum Iterative Method (MIM), Shadow Attack (SA), and Fast Minimum Norm (FMN), in terms of attribute preservation and image visual quality.
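The core idea of the abstract — restrict an FGSM-style perturbation to the image region that a Grad-CAM++ map highlights for the target attribute — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name `masked_fgsm_perturb`, the epsilon value, and the thresholding of the attention map into a binary mask are all assumptions made for clarity; the paper's actual pipeline computes gradients and attention maps from an attribute classifier.

```python
import numpy as np

def masked_fgsm_perturb(image, grad, cam, eps=0.03, thresh=0.5):
    """Hypothetical sketch: FGSM noise applied only where the
    Grad-CAM++ attention map marks the attribute-defining region.

    image : float array in [0, 1]
    grad  : gradient of the attribute-classifier loss w.r.t. the image
    cam   : Grad-CAM++ attention map for the target attribute
    """
    # Normalize the attention map to [0, 1] and threshold it into a
    # binary mask selecting the attribute-defining region.
    cam_norm = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    mask = (cam_norm >= thresh).astype(image.dtype)

    # FGSM step: move each masked pixel by eps along the sign of the
    # loss gradient; unmasked pixels (other attributes) are untouched.
    adv = image + eps * np.sign(grad) * mask
    return np.clip(adv, 0.0, 1.0)

# Toy example: only the top half of the image carries the attribute.
img = np.full((4, 4), 0.5)
grad = np.ones((4, 4))
cam = np.zeros((4, 4))
cam[:2, :] = 1.0
adv = masked_fgsm_perturb(img, grad, cam)
# Masked (top) pixels shift to 0.53; unmasked pixels stay at 0.5.
```

The masking is what distinguishes this from naive FGSM: pixels outside the attention region are left exactly as they were, which is how the method aims to preserve the remaining attributes and overall visual quality.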




Acknowledgements

This research work is supported by IIT Ropar under ISIRD grant 9-231/2016/IIT-RPR/1395 and by DST under CSRI grant DST/CSRI/2018/234.

Author information


Corresponding author

Correspondence to Puneet Goyal.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Kapil Rana, Aman Pandey, Parth Goyal, and Gurinder Singh contributed equally to this work.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Rana, K., Pandey, A., Goyal, P. et al. A novel privacy protection approach with better human imperceptibility. Appl Intell 53, 21788–21798 (2023). https://doi.org/10.1007/s10489-023-04592-7
