
Improving Transferability of Reversible Adversarial Examples Based on Flipping Transformation

  • Conference paper
Data Science (ICPCSEE 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1879)


Abstract

Adding subtle perturbations to an image can cause a classification model to misclassify it; such images are called adversarial examples. Adversarial examples threaten the safe use of deep neural networks, but when combined with reversible data hiding (RDH) technology, they can prevent images from being correctly identified by unauthorized models while allowing the original image to be recovered losslessly under authorized models. On this basis, the reversible adversarial example (RAE) has emerged. However, existing RAE techniques focus on feasibility, attack success rate and image quality, but ignore transferability and time complexity. In this paper, we optimize the data hiding structure and combine it with a data augmentation technique that flips the input image with a given probability to mitigate overfitting. While maintaining a high white-box attack success rate and the visual quality of the image, the proposed method improves the transferability of reversible adversarial examples by approximately 16% and reduces the computational cost by approximately 43% compared with the state-of-the-art method. In addition, an appropriate flip probability can be selected for different application scenarios.
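The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the flip-with-probability idea it describes: an iterative FGSM-style attack in which each gradient step sees the input horizontally flipped with probability flip_prob. The function names (random_flip, ifgsm_flip_attack) and all hyperparameter values are illustrative assumptions, not the authors' method.

    # Minimal sketch of a flip-based input transformation inside an iterative
    # FGSM-style attack. `random_flip` and `ifgsm_flip_attack` are illustrative
    # names; this is not the paper's released code.
    import torch
    import torch.nn.functional as F

    def random_flip(x, flip_prob):
        # Horizontally flip the (N, C, H, W) batch with probability flip_prob.
        if torch.rand(1).item() < flip_prob:
            return torch.flip(x, dims=[3])
        return x

    def ifgsm_flip_attack(model, x, y, eps=8/255, steps=10, flip_prob=0.5):
        # Each gradient step sees a (possibly) flipped copy of the input,
        # which discourages the perturbation from overfitting the source model.
        alpha = eps / steps
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(random_flip(x_adv, flip_prob)), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            # Project back into the eps-ball around x and the valid pixel range.
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
            x_adv = x_adv.clamp(0.0, 1.0)
        return x_adv

Setting flip_prob to 0 recovers plain iterative FGSM, so the trade-off mentioned at the end of the abstract, choosing a flip probability per application scenario, corresponds to tuning this single parameter.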

This research work is partly supported by the National Natural Science Foundation of China (62172001), the Provincial Colleges Quality Project of Anhui Province (2020xsxxkc047) and the National Undergraduate Innovation and Entrepreneurship Training Program (202210357077).



Author information

Corresponding author: Wanli Lyu.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Fang, Y., Jia, J., Yang, Y., Lyu, W. (2023). Improving Transferability of Reversible Adversarial Examples Based on Flipping Transformation. In: Yu, Z., et al. Data Science. ICPCSEE 2023. Communications in Computer and Information Science, vol 1879. Springer, Singapore. https://doi.org/10.1007/978-981-99-5968-6_30


  • DOI: https://doi.org/10.1007/978-981-99-5968-6_30


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-5967-9

  • Online ISBN: 978-981-99-5968-6

  • eBook Packages: Computer Science, Computer Science (R0)
