DOI: 10.1145/3400302.3415758

Counteracting adversarial attacks in autonomous driving

Published: 17 December 2020

Abstract

In this paper, we focus on robust deep stereo vision for autonomous driving systems and on counteracting adversarial attacks against it. Autonomous systems must process measurement data in real time, and these data often contain significant uncertainty and noise. Adversarial attacks, which have been studied extensively in recent years, provide a way to simulate such worst-case perturbations. To counteract these attacks, this paper proposes a novel defense method. A stereo regularizer guides the model to learn the implicit relationship between the left and right images of the stereo-vision system, with univariate and multivariate functions characterizing the relationships between the two input images and the object detection model. The regularizer is then relaxed to its upper bound to improve adversarial robustness, and the upper bound is in turn approximated by the remainder of its Taylor expansion to improve the local smoothness of the loss surface. The model parameters are trained via adversarial training with this novel regularization term. Our method exploits basic knowledge from the physical world, namely the mutual constraints between the two images of a stereo-based system, so outliers can be detected and defended against with high accuracy and efficiency. Numerical experiments demonstrate that the proposed method outperforms traditional adversarial training on state-of-the-art stereo-based 3D object detection models for autonomous vehicles.
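The defense can be illustrated, very loosely, with a toy sketch: adversarial training in which the loss carries an extra term penalizing disagreement between the predictions derived from the left and right views. The linear model, FGSM-style inner attack, data shapes, and hyperparameters below are hypothetical stand-ins for illustration only, not the paper's actual stereo detector, relaxation, or Taylor-remainder approximation.

```python
# Minimal, hypothetical sketch of adversarial training with a
# stereo-consistency regularizer. A toy linear model stands in for the
# stereo 3D detector; all shapes and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy "stereo" data: the right view is a slightly perturbed left view.
n, d = 64, 8
x_left = rng.normal(size=(n, d))
x_right = x_left + 0.05 * rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = x_left @ w_true + 0.01 * rng.normal(size=n)

lam, eps, lr = 0.1, 0.05, 0.05  # regularizer weight, attack budget, step size
w = np.zeros(d)

def loss(w, xl, xr, y):
    task = np.mean((xl @ w - y) ** 2)
    # Stereo regularizer: predictions from the two views should agree.
    stereo = np.mean(((xl - xr) @ w) ** 2)
    return task + lam * stereo

initial = loss(w, x_left, x_right, y)

for _ in range(300):
    # Inner (attack) step: FGSM-style perturbation of the left view
    # along the sign of the task-loss gradient, bounded by eps.
    g_x = 2.0 / n * (x_left @ w - y)[:, None] * w
    x_adv = x_left + eps * np.sign(g_x)
    # Outer (defense) step: gradient descent on the regularized
    # adversarial loss with respect to the model weights.
    resid = x_adv @ w - y
    diff = (x_adv - x_right) @ w
    g_w = 2.0 / n * (x_adv.T @ resid + lam * (x_adv - x_right).T @ diff)
    w -= lr * g_w

final = loss(w, x_left, x_right, y)
print(initial, final)  # clean-data loss before vs. after robust training
```

The regularizer couples the two views exactly as the abstract describes at a high level: because the physical scene constrains left and right images jointly, a perturbation that affects only one view inflates the consistency term and is penalized during training.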




Published In

ICCAD '20: Proceedings of the 39th International Conference on Computer-Aided Design
November 2020, 1396 pages
ISBN: 9781450380263
DOI: 10.1145/3400302
General Chair: Yuan Xie

In-Cooperation

  • IEEE CAS
  • IEEE CEDA
  • IEEE CS

Publisher

Association for Computing Machinery, New York, NY, United States



Author Tags

  1. adversarial defense
  2. autonomous system
  3. local smoothness
  4. robust stereo vision

Qualifiers

  • Invited-talk

Conference

ICCAD '20

Acceptance Rates

Overall Acceptance Rate: 457 of 1,762 submissions, 26%


Cited By

  • Toward explainable artificial intelligence: A survey and overview on their intrinsic properties. Neurocomputing, vol. 563, 126919, Jan 2024. DOI: 10.1016/j.neucom.2023.126919
  • Deep learning adversarial attacks and defenses in autonomous vehicles: a systematic literature review from a safety perspective. Artificial Intelligence Review, vol. 58, no. 1, Nov 2024. DOI: 10.1007/s10462-024-11014-8
  • Secure Gait Recognition-Based Smart Surveillance Systems Against Universal Adversarial Attacks. Journal of Database Management, vol. 34, no. 2, pp. 1-25, Feb 2023. DOI: 10.4018/JDM.318415
  • Saliency Attack: Towards Imperceptible Black-box Adversarial Attack. ACM Transactions on Intelligent Systems and Technology, vol. 14, no. 3, pp. 1-20, Apr 2023. DOI: 10.1145/3582563
  • ADS-Lead: Lifelong Anomaly Detection in Autonomous Driving Systems. IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 1, pp. 1039-1051, Jan 2023. DOI: 10.1109/TITS.2021.3122906
  • Cybersecurity of Autonomous Vehicles: A Systematic Literature Review of Adversarial Attacks and Defense Models. IEEE Open Journal of Vehicular Technology, vol. 4, pp. 417-437, 2023. DOI: 10.1109/OJVT.2023.3265363
  • Counteracting Adversarial Attacks in Autonomous Driving. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 41, no. 12, pp. 5193-5206, Dec 2022. DOI: 10.1109/TCAD.2022.3166112
  • Deep H-GCN: Fast Analog IC Aging-Induced Degradation Estimation. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 41, no. 7, pp. 1990-2003, Jul 2022. DOI: 10.1109/TCAD.2021.3107250
  • Adversarial Attacks and Defenses for Deep-Learning-Based Unmanned Aerial Vehicles. IEEE Internet of Things Journal, vol. 9, no. 22, pp. 22399-22409, Nov 2022. DOI: 10.1109/JIOT.2021.3111024
  • Adversarial Attacks and Defense Technologies on Autonomous Vehicles: A Review. Applied Computer Systems, vol. 26, no. 2, pp. 96-106, Dec 2021. DOI: 10.2478/acss-2021-0012
