ABSTRACT
Convolutional Neural Networks (CNNs) have become an integral part of anomaly detection in Cyber-Physical Systems (CPS). Although highly accurate, the advent of adversarial patches has exposed the vulnerability of CNNs, posing a security concern for safety-critical CPS. Current patch attacks typically involve only a single adversarial patch. Using multiple patches enables an attacker to craft a stronger adversary by exploiting various combinations of patches and their respective locations. Moreover, mitigating multiple patches is challenging in practice because the domain is still nascent. In this work, we present three novel ways to perform an attack with multiple patches: the Split, Mono-Multi, and Poly-Multi attacks. We also propose a search method, Boundary Space Search (BSS), for the placement of patches to further enhance the attack's efficacy, experimenting on the EuroSAT, Imagenette, and CIFAR10 datasets at various perturbation levels across diverse model architectures. The results show that the Poly-Multi attack outperforms other multi-patch and single-patch attacks while offering the best perceptual stealth to evade detection. We also highlight the trade-off between the number of patches and the patch size in a multi-patch attack. Finally, we analyze the ability of the multi-patch attack to overcome state-of-the-art defenses designed for single-patch attacks.
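The abstract describes applying several patches at searched locations. As a minimal illustrative sketch only (the paper's actual attack and BSS algorithm are not reproduced here), the snippet below shows the two generic building blocks such an attack needs: pasting multiple square patches into an image, and enumerating candidate top-left positions restricted to a band near the image boundary, which is one plausible reading of a "boundary space" search; the `margin` and `stride` parameters are hypothetical knobs, not the paper's.

```python
import numpy as np

def apply_patches(image, patches, locations):
    """Paste square patches into a copy of `image` at (row, col) top-left corners."""
    out = image.copy()
    for patch, (r, c) in zip(patches, locations):
        h, w = patch.shape[:2]
        out[r:r + h, c:c + w] = patch
    return out

def boundary_candidates(img_size, patch_size, margin, stride):
    """Candidate top-left corners lying within `margin` pixels of the image
    boundary (an assumed interpretation of a boundary-space search)."""
    cands = []
    for r in range(0, img_size - patch_size + 1, stride):
        for c in range(0, img_size - patch_size + 1, stride):
            near_edge = (r < margin or c < margin or
                         r > img_size - patch_size - margin or
                         c > img_size - patch_size - margin)
            if near_edge:
                cands.append((r, c))
    return cands
```

In a full attack, an optimizer would score each candidate placement combination by the target model's loss and optimize the patch pixels at the best locations; that loop is omitted here.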
Index Terms
- Vulnerability of CNNs against Multi-Patch Attacks