Research Article
DOI: 10.1145/3631908.3631920

Preprocessing-based Adversarial Defense for Object Detection via Feature Filtration

Published: 02 February 2024

ABSTRACT

Deep Neural Network (DNN)-based object detection has achieved great success in a variety of scenarios. However, adversarial examples can cause catastrophic mistakes in DNNs. Although adversarial examples with human-imperceptible perturbations can completely change a network's predictions in the decision space, few defenses for object detection are known to date. In this paper, we propose an end-to-end input transformation model to defend against adversarial examples, motivated by research on feature representations under adversarial attack. The proposed model consists of an autoencoder (an encoder and a decoder) and a critic network that is used only during training. Both benign and adversarial examples are used to train the model. The critic network forces the encoder to eliminate the distribution divergence between benign and adversarial examples in the latent space, thereby filtering out non-robust features and adversarial perturbations. The decoder then reconstructs a clean version of the input from the preserved feature vectors, which is fed to the trained detector. Extensive experiments on the challenging PASCAL VOC dataset demonstrate that the proposed method significantly improves the robustness of various detectors against unseen adversarial attacks, with better performance and lower time cost than previous works.
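The abstract describes the training pipeline in enough detail to sketch in code. The following is a minimal PyTorch-style sketch, not the authors' implementation: the layer shapes, loss weights, the plain Wasserstein-style critic objective (gradient penalty omitted), and the `detector` handle are all illustrative assumptions. It shows the three pieces the abstract names: an encoder that maps images to latent features, a critic used only during training to align benign and adversarial latent distributions, and a decoder that reconstructs a purified image which is then passed to an unchanged, pre-trained detector.

# Minimal sketch of the preprocessing defense described above (PyTorch).
# Layer sizes, loss weights, and the WGAN-style critic loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps an image to a latent feature map (ideally keeping only robust features)."""
    def __init__(self, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a cleaned image from the preserved latent features."""
    def __init__(self, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

class Critic(nn.Module):
    """Scores latent codes; trained to separate benign from adversarial latents."""
    def __init__(self, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, z):
        return self.net(z)

def training_step(enc, dec, critic, opt_ae, opt_critic, x_benign, x_adv, lam=1.0):
    # x_adv is assumed to be the adversarial counterpart of x_benign (paired data).
    # 1) Critic step: push benign and adversarial latent distributions apart.
    z_b, z_a = enc(x_benign).detach(), enc(x_adv).detach()
    critic_loss = critic(z_a).mean() - critic(z_b).mean()
    opt_critic.zero_grad()
    critic_loss.backward()
    opt_critic.step()

    # 2) Autoencoder step: fool the critic (align adversarial latents with benign
    #    ones) while reconstructing the benign image from both inputs.
    z_b, z_a = enc(x_benign), enc(x_adv)
    align = -critic(z_a).mean()                           # distribution alignment
    recon = F.mse_loss(dec(z_b), x_benign) + F.mse_loss(dec(z_a), x_benign)
    ae_loss = recon + lam * align
    opt_ae.zero_grad()
    ae_loss.backward()
    opt_ae.step()
    return critic_loss.item(), ae_loss.item()

# Inference: purify the input, then run the unchanged, pre-trained detector.
# `detector` is any trained detection model wrapper and is assumed, not defined here.
@torch.no_grad()
def detect_with_defense(enc, dec, detector, x):
    return detector(dec(enc(x)))

Because the critic is discarded after training, the inference-time overhead is a single encoder-decoder forward pass in front of the detector, which is consistent with the abstract's claim of lower time cost than prior defenses.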


Published in

ICACS '23: Proceedings of the 7th International Conference on Algorithms, Computing and Systems
October 2023, 185 pages
ISBN: 9798400709098
DOI: 10.1145/3631908
Copyright © 2023 ACM


Publisher

Association for Computing Machinery, New York, NY, United States
