DOI: 10.1145/3579988.3585054

Research article

Vulnerability of CNNs against Multi-Patch Attacks

Published: 24 April 2023

ABSTRACT

Convolutional Neural Networks (CNNs) have become an integral part of anomaly detection in Cyber-Physical Systems (CPS). Although highly accurate, CNNs were shown to be vulnerable by the advent of adversarial patches, posing a security concern for safety-critical CPS. Current patch attacks typically involve only a single adversarial patch. Using multiple patches enables the attacker to craft a stronger adversary by exploiting various combinations of the patches and their respective locations. Moreover, mitigating multiple patches is challenging in practice because the domain is still nascent. In this work, we present three novel ways to perform an attack with multiple patches: the Split, Mono-Multi, and Poly-Multi attacks. We also propose a search method named Boundary Space Search (BSS) for the placement of patches to further enhance the attack's efficacy, experimenting on the EuroSAT, Imagenette, and CIFAR10 datasets at various perturbation levels across diverse model architectures. The results show that the Poly-Multi attack outperforms other multi-patch and single-patch attacks and offers the best perceptual stealth for evading detection. We also highlight the trade-off between the number of patches and the patch size in a multi-patch attack. Finally, we analyze the ability of the multi-patch attack to overcome state-of-the-art defenses designed for single-patch attacks.
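The abstract does not reproduce the attack algorithms themselves, but the core mechanical step shared by all multi-patch attacks — overwriting several image regions with patch contents at chosen locations — can be sketched as follows. This is a minimal illustration, not the paper's method; the function name `apply_patches` and the hard-coded locations are hypothetical, and the actual attacks optimize both the patch pixels and their placement (e.g. via BSS).

```python
import numpy as np

def apply_patches(image, patches, locations):
    """Paste each patch into a copy of `image` at its (row, col) top-left corner.

    image:     (H, W, C) float array in [0, 1]
    patches:   list of (h, w, C) arrays (the adversarial patch contents)
    locations: list of (row, col) coordinates, one per patch
    """
    out = image.copy()
    for patch, (r, c) in zip(patches, locations):
        h, w = patch.shape[:2]
        out[r:r + h, c:c + w] = patch  # patch pixels fully replace the originals
    return out

# Example: two 4x4 patches on a CIFAR10-sized 32x32 RGB image.
img = np.zeros((32, 32, 3))
p1 = np.ones((4, 4, 3))        # white patch in the top-left corner
p2 = np.full((4, 4, 3), 0.5)   # grey patch near the bottom-right
adv = apply_patches(img, [p1, p2], [(0, 0), (20, 20)])
```

In an actual attack, the patch contents would be trained by gradient ascent on the target model's loss, and the location list would be selected by a placement search rather than fixed by hand.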


Published in:
SaT-CPS '23: Proceedings of the 2023 ACM Workshop on Secure and Trustworthy Cyber-Physical Systems
April 2023, 48 pages
ISBN: 9798400701009
DOI: 10.1145/3579988

Copyright © 2023 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States
