ABSTRACT
Deep Neural Network (DNN)-based object detection has achieved great success in a variety of scenarios. However, adversarial examples can cause catastrophic mistakes in DNNs: although human-imperceptible perturbations can completely change a network's predictions in the decision space, few defenses for object detection are known to date. In this paper, we propose an end-to-end input transformation model to defend against adversarial examples, motivated by research on feature representations under adversarial attacks. The proposed model consists of an autoencoder (comprising an encoder and a decoder) and a critic network that is used only during training. Both benign and adversarial examples serve as the training set for the proposed model. The critic network forces the encoder to eliminate the distribution divergence between benign and adversarial examples in the latent space, thereby filtering out non-robust features and adversarial perturbations. Finally, the decoder reconstructs the preserved feature vectors into a clean version of the input, which is then fed to the trained detector. Extensive experiments on the challenging PASCAL VOC dataset demonstrate that the proposed method significantly improves the robustness of various detectors against unseen adversarial attacks, and that it achieves better performance at lower time cost than previous works.
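The architecture described above can be illustrated with a minimal sketch. This is not the paper's exact implementation: the layer sizes, latent dimension, and the specific critic objective (a WGAN-style distance between benign and adversarial latent distributions) are illustrative assumptions. It shows the three components — encoder, decoder, and a training-only critic on the latent space — and one training step in which the critic learns to separate benign-derived from adversarial-derived latents while the encoder/decoder pair is trained to fool the critic and to reconstruct a clean image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Maps an input image to a latent vector intended to retain only
    # the robust features shared by benign and adversarial examples.
    def __init__(self, channels=3, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Reconstructs a "cleaned" image from the latent vector; this
    # reconstruction is what would be fed to the frozen detector.
    def __init__(self, channels=3, latent_dim=128, size=32):
        super().__init__()
        self.size = size
        self.fc = nn.Linear(latent_dim, 64 * (size // 4) ** 2)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, self.size // 4, self.size // 4)
        return self.net(h)

class Critic(nn.Module):
    # Scores latent vectors; used only during training to estimate the
    # divergence between benign and adversarial latent distributions.
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1),
        )

    def forward(self, z):
        return self.net(z)

def training_step(enc, dec, critic, x_benign, x_adv):
    """One illustrative step: the critic maximizes the score gap between
    benign and adversarial latents; the autoencoder minimizes that gap
    plus a pixel-space reconstruction loss toward the benign image."""
    z_b, z_a = enc(x_benign), enc(x_adv)
    # Critic loss (minimized): negative Wasserstein-style estimate.
    critic_loss = -(critic(z_b.detach()).mean() - critic(z_a.detach()).mean())
    # Autoencoder loss (minimized): reconstruct the benign image from the
    # adversarial input, while making the two latent distributions match.
    rec_loss = F.mse_loss(dec(z_a), x_benign)
    align_loss = critic(z_b).mean() - critic(z_a).mean()
    return critic_loss, rec_loss + align_loss
```

In a full training loop the two losses would be optimized by separate optimizers in alternation, as in WGAN training; at inference time only the encoder and decoder are kept, and the critic is discarded.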
Index Terms
- Preprocessing-based Adversarial Defense for Object Detection via Feature Filtration