A Survey of Adversarial Examples and Deep Learning Based Data Hiding

  • Conference paper
  • In: Security and Privacy in Social Networks and Big Data (SocialSec 2021)

Abstract

Nowadays, the emergence of deep learning technology has brought breakthroughs to many fields and is widely used in practical scenarios. At the same time, the concept of adversarial examples has gradually become known: by adding tiny perturbations to original samples, an attacker can sharply reduce the accuracy of a deep classification model and thereby defeat deep learning. In this paper, we survey adversarial examples and deep learning based data hiding, and then put forward the idea and feasibility of combining the two, providing a novel concept of data-hiding-based adversarial examples. In addition, this paper introduces methods for generating adversarial examples and defenses against them. We also outline future research on using watermarks to generate adversarial examples. Making use of the imperceptibility of data hiding, we present a novel concept of adversarial examples that add meaningful watermarks to the original image while attacking deep neural network models.
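
As context for the perturbation mechanism described in the abstract, the sketch below illustrates the fast gradient sign method (FGSM) of Goodfellow et al., one of the classic generation methods this survey covers. This is a minimal illustration, not the paper's own implementation: the use of PyTorch, the `fgsm_attack` helper name, and the epsilon budget are all assumptions for the sake of the example.

```python
# Minimal FGSM sketch (hypothetical helper, not taken from the paper).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` within an L-infinity budget `epsilon` so the model's loss increases."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the sign of the gradient (the direction
    # that locally increases the loss), then clamp back to valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

On a typical image classifier, a call such as `fgsm_attack(net, x, y)` often changes the predicted label of `x` even though the perturbed image looks unchanged to a human observer. The watermark-based idea proposed in this paper aims to exploit the same imperceptibility, but with meaningful embedded content rather than noise-like perturbations.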

Author information


Corresponding author: Xiaolong Liu.


Copyright information

© 2021 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Feng, Z., Liu, C., Ji, X., Liu, X. (2021). A Survey of Adversarial Examples and Deep Learning Based Data Hiding. In: Lin, L., Liu, Y., Lee, CW. (eds) Security and Privacy in Social Networks and Big Data. SocialSec 2021. Communications in Computer and Information Science, vol 1495. Springer, Singapore. https://doi.org/10.1007/978-981-16-7913-1_12

  • DOI: https://doi.org/10.1007/978-981-16-7913-1_12

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-7912-4

  • Online ISBN: 978-981-16-7913-1

  • eBook Packages: Computer Science, Computer Science (R0)
