Abstract
Object detectors such as Faster R-CNN and YOLO have numerous applications, including in critical systems such as self-driving cars and unmanned aerial vehicles. Their vulnerabilities must be studied thoroughly before they are deployed in such systems, to avoid irrecoverable losses caused by intentional attacks. Researchers have proposed several methods for crafting adversarial examples to study the security risks of object detectors, but all of them require modifying pixels inside the target objects, and some of these modifications are substantial, significantly distorting the target objects. In this paper, an algorithm is proposed that derives an adversarial signal placed around the border of a target object to fool object detectors. Computationally, the algorithm seeks a border around the target object that misleads Faster R-CNN into producing a very large bounding box and ultimately decreases its confidence in the target object. Using the stop sign as the target object, adversarial borders of four different sizes are generated and evaluated on 77 videos: five in-car videos for digital attacks and 72 videos for physical attacks. The experimental results show that the adversarial border can effectively fool Faster R-CNN and YOLOv3 both digitally and physically. In addition, the results on YOLOv3 indicate that the adversarial border is transferable, which is vital for black-box attacks.
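The attack described above can be viewed as an optimization confined to a ring of pixels surrounding the object. The following is a minimal illustrative sketch of that idea, not the authors' implementation: it assumes a hypothetical differentiable function target_confidence(image) that returns the detector's score for the attacked object (e.g., Faster R-CNN's confidence for the stop sign), and it applies a PGD-style update restricted to a border mask.

```python
# Illustrative sketch only (assumptions: target_confidence is a hypothetical
# differentiable detector score; image is a (1, 3, h, w) tensor in [0, 1]).
import torch

def make_border_mask(h, w, top, left, box_h, box_w, border=8):
    """1 inside a ring of `border` pixels around the object box, 0 elsewhere."""
    mask = torch.zeros(1, 3, h, w)
    t0, l0 = max(top - border, 0), max(left - border, 0)
    mask[..., t0:top + box_h + border, l0:left + box_w + border] = 1
    mask[..., top:top + box_h, left:left + box_w] = 0  # leave the object itself untouched
    return mask

def attack(image, target_confidence, mask, steps=200, alpha=2 / 255):
    """PGD-style descent on the detector's confidence, confined to the border mask."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = (image + delta * mask).clamp(0, 1)
        loss = target_confidence(adv)           # detector's confidence in the target object
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend to suppress the detection
            delta.grad.zero_()
    return (image + delta.detach() * mask).clamp(0, 1)
```

Because the adversarial border in the paper is printed and attached physically, the perturbation is not bounded by a small epsilon; in this sketch it is likewise left visually unconstrained apart from clipping to valid pixel values.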
Acknowledgement
This work is partially supported by the Ministry of Education, Singapore through Academic Research Fund Tier 1, RG30/17.
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Huang, Y., Kong, A.W.K., Lam, K.Y. (2019). Attacking Object Detectors Without Changing the Target Object. In: Nayak, A., Sharma, A. (eds.) PRICAI 2019: Trends in Artificial Intelligence. Lecture Notes in Computer Science, vol. 11672. Springer, Cham. https://doi.org/10.1007/978-3-030-29894-4_1
DOI: https://doi.org/10.1007/978-3-030-29894-4_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-29893-7
Online ISBN: 978-3-030-29894-4