
A Game Theoretical Vulnerability Analysis of Adversarial Attack

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13599)

Conference: Advances in Visual Computing (ISVC 2022)

Abstract

In recent times, deep learning has been widely used to automate security tasks in cyber domains. However, adversaries can manipulate data in many situations and diminish a deployed deep learning model’s accuracy. One notable example is fooling CAPTCHA data to get past a CAPTCHA-based classifier, leaving the critical system behind it vulnerable to cyber-attacks. To alleviate this, we propose a game-theoretic computational framework to analyze the CAPTCHA-based classifier’s vulnerability, strategies, and outcomes by forming a simultaneous two-player game. We apply the Fast Gradient Sign Method (FGSM) and the One Pixel Attack to CAPTCHA data to imitate realistic cyber-attack scenarios. To interpret these scenarios from a game-theoretic perspective, we then represent the interaction as a Stackelberg game in a Kuhn tree and study the players’ possible behaviors and actions using the classifier’s actual predicted values. We thus interpret potential attacks on deep learning applications while presenting viable defense strategies from a game-theoretic standpoint.
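
To make the first attack concrete: FGSM perturbs an input x by epsilon times the sign of the loss gradient with respect to x. The sketch below is a minimal PyTorch rendition under assumed names (fgsm_attack, a generic classifier `model`, and a placeholder epsilon), not the paper's implementation.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)    # loss the attacker wants to raise
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()    # perturb each pixel by +/- epsilon
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range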
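
The game-theoretic step can be sketched just as compactly. In the sketch below, the payoff numbers are hypothetical stand-ins for the classifier's actual predicted values, and backward induction on the two-level Kuhn tree recovers the Stackelberg outcome: the defender (follower) best-responds to each attacker (leader) action, and the attacker picks the action with the best induced payoff.

    # Hypothetical payoffs (attacker_utility, defender_utility); the paper
    # derives these from the classifier's actual predicted values.
    payoffs = {
        ("fgsm",      "filter"):    (0.2, 0.8),
        ("fgsm",      "no_filter"): (0.9, 0.1),
        ("one_pixel", "filter"):    (0.3, 0.7),
        ("one_pixel", "no_filter"): (0.7, 0.3),
    }

    def stackelberg(payoffs):
        """Backward induction on the two-level Kuhn tree."""
        outcome = None
        for a in {a for a, _ in payoffs}:
            # Defender's best response to attacker action a.
            d = max((dd for aa, dd in payoffs if aa == a),
                    key=lambda dd: payoffs[(a, dd)][1])
            if outcome is None or payoffs[(a, d)][0] > payoffs[outcome][0]:
                outcome = (a, d)
        return outcome

    print(stackelberg(payoffs))  # ('one_pixel', 'filter') with these numbers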

References

  1. Szegedy, C., et al.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)

  2. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world (2016)

  3. Meng, D., Chen, H.: Magnet: a two-pronged defense against adversarial examples. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 135–147 (2017)

  4. Fudenberg, D., Tirole, J.: Game Theory. MIT Press, Cambridge, p. 86 (1991)

  5. Zhang, X., Xiong, Y.: Impulse noise removal using directional difference based noise detector and adaptive weighted mean filter. IEEE Sig. Process. Lett. 16(4), 295–298 (2009)

  6. Myerson, R.B.: Game Theory. Harvard University Press, Cambridge (2013)

  7. Camerer, C.F.: Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press, Princeton (2011)

  8. Nowé, A., Vrancx, P., De Hauwere, Y.-M.: Game theory and multi-agent reinforcement learning. In: Wiering, M., van Otterlo, M. (eds.) Reinforcement Learning. Adaptation, Learning, and Optimization, vol. 12, pp. 441–470. Springer, Berlin (2012)

  9. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)

  10. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017)

  11. Su, J., Vargas, D.V., Sakurai, K.: One pixel attack for fooling deep neural networks. IEEE Trans. Evol. Comput. 23(5), 828–841 (2019)

  12. Wang, Z.-H.: Recognition of text-based captcha with merged characters. In: DEStech Transactions on Computer Science and Engineering, no. CECE (2017)

  13. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)

  14. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE (2017)

  15. Dong, Y., et al.: Boosting adversarial attacks with momentum. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9185–9193 (2018)

Acknowledgement

Portions of this material are based upon work supported by the Office of the Under Secretary of Defense for Research and Engineering under award number FA9550-21-1-0207.

Author information

Corresponding author

Correspondence to Khondker Fariha Hossain.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Hossain, K.F., Tavakkoli, A., Sengupta, S. (2022). A Game Theoretical Vulnerability Analysis of Adversarial Attack. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2022. Lecture Notes in Computer Science, vol 13599. Springer, Cham. https://doi.org/10.1007/978-3-031-20716-7_29

  • DOI: https://doi.org/10.1007/978-3-031-20716-7_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20715-0

  • Online ISBN: 978-3-031-20716-7

  • eBook Packages: Computer Science, Computer Science (R0)
