
ISWP: Novel high-fidelity adversarial examples generated by incorporating invisible and secure watermark perturbations


Abstract

Invisible watermarking is widely used to trace unauthorized use of copyrighted content and hold the infringing parties accountable, but it does not prevent attackers from gaining illegal access to digital assets, so user privacy and security remain significantly compromised. Recent investigations have revealed that adversarial attacks can mislead state-of-the-art deep learning models into incorrect classifications, and the resulting adversarial examples can dramatically mitigate malicious access to protected content. To integrate invisible watermarking and adversarial attacks into a unified task, we explore the potential of creating meaningful perturbations in adversarial examples that combine adversarial attacks with secure watermark perturbations. This paper proposes ISWP (invisible and secure watermark perturbations), a novel method that embeds meaningful perturbations into input images to accomplish both adversarial attack and copyright protection. ISWP employs the discrete wavelet transform (DWT) and basin hopping (BH) in its adversarial attack process, producing imperceptible adversarial watermark perturbations, and incorporates encryption technologies to safeguard against unauthorized malicious access. Experimental results show that the generated adversarial examples exhibit benign visual quality while achieving remarkable attack capacity and robustness on different DNN models, and that the embedded watermarks can be extracted as powerful evidence for copyright certification, demonstrating their effectiveness as a protection mechanism for private content.
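The abstract describes the pipeline only at a high level. The sketch below illustrates, under stated assumptions, how such a scheme could be wired together in Python: watermark bits are encrypted, additively embedded into a DWT detail sub-band, and the embedding strength is searched with basin hopping until the model's confidence on the true class drops. The `model_fn` interface, the XOR stream cipher (standing in for the paper's unspecified encryption technology), the Haar wavelet, the choice of the diagonal detail sub-band, and all hyperparameters are illustrative assumptions, not the authors' exact ISWP algorithm.

```python
# Minimal sketch of the ISWP idea from the abstract: encrypt a watermark,
# embed it in DWT coefficients, and tune the embedding strength with basin
# hopping (BH) until the watermarked image is misclassified. Assumes a
# grayscale image with even dimensions, normalized to [0, 1].
import numpy as np
import pywt
from scipy.optimize import basinhopping

def encrypt_watermark(wm_bits: np.ndarray, key: int) -> np.ndarray:
    """XOR the watermark bits with a key-seeded pseudo-random stream
    (a stand-in for the paper's unspecified encryption scheme)."""
    rng = np.random.default_rng(key)
    stream = rng.integers(0, 2, size=wm_bits.shape)
    return np.bitwise_xor(wm_bits, stream)

def embed(image: np.ndarray, wm_bits: np.ndarray, alpha: float) -> np.ndarray:
    """Additively embed watermark bits into the diagonal (HH) detail
    sub-band of a one-level 2-D DWT, then invert the transform."""
    LL, (LH, HL, HH) = pywt.dwt2(image, "haar")
    flat = HH.flatten()
    n = min(flat.size, wm_bits.size)
    # Map bits {0, 1} -> {-1, +1} and scale by the embedding strength alpha.
    flat[:n] += alpha * (2.0 * wm_bits[:n] - 1.0)
    out = pywt.idwt2((LL, (LH, HL, flat.reshape(HH.shape))), "haar")
    return np.clip(out, 0.0, 1.0)

def iswp_attack(image, wm_bits, model_fn, true_label, key=42):
    """Search the embedding strength with basin hopping so the watermarked
    image both fools the classifier and stays visually close to the
    original. `model_fn(image) -> class probabilities` is an assumed,
    hypothetical interface; only its outputs are queried (black box)."""
    enc_bits = encrypt_watermark(wm_bits, key)

    def loss(alpha_vec):
        adv = embed(image, enc_bits, float(alpha_vec[0]))
        probs = model_fn(adv)
        # Penalize confidence on the true class plus visible distortion.
        return probs[true_label] + 0.1 * np.abs(adv - image).mean()

    result = basinhopping(loss, x0=[0.05], niter=50,
                          minimizer_kwargs={"method": "Nelder-Mead"})
    return embed(image, enc_bits, float(result.x[0]))
```

Basin hopping suits this setting because the loss is non-differentiable in the embedding strength and the target model is treated as a black box; extraction for copyright certification would invert the same steps (DWT, sub-band read-out, decryption with the shared key).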


Data availability

The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.


Acknowledgements

This work was supported in part by the University-Industry Cooperation Project Fund of Fujian Province (Grant No. 2024H6007) and the Scientific and Technological Innovation Fund of Fujian Agriculture and Forestry University (Grant No. KFb22091XA).

Author information

Authors and Affiliations

Authors

Contributions

Jinchao Liang: Investigation, Methodology, Writing - original draft, Formal analysis. Yang Liu: Investigation, Resources, Writing - review & editing. Lu Gao: Conceptualization, Writing - original draft. Ze Zhang: Conceptualization, Validation, Writing - review & editing. Xiaolong Liu: Conceptualization, Supervision, Writing - review & editing, Funding acquisition.

Corresponding author

Correspondence to Xiaolong Liu.

Ethics declarations

Ethical and informed consent for data used

The authors declare no conflict of interest.

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Liang, J., Liu, Y., Gao, L. et al. ISWP: Novel high-fidelity adversarial examples generated by incorporating invisible and secure watermark perturbations. Appl Intell 55, 40 (2025). https://doi.org/10.1007/s10489-024-05917-w
