
Phantom of the ADAS: Securing Advanced Driver-Assistance Systems from Split-Second Phantom Attacks

Published: 02 November 2020

Abstract

In this paper, we investigate "split-second phantom attacks," a scientific gap that causes two commercial advanced driver-assistance systems (ADASs), Tesla Model X (HW 2.5 and HW 3) and Mobileye 630, to treat a depthless object that appears for a few milliseconds as a real obstacle/object. We discuss the challenge that split-second phantom attacks create for ADASs. We demonstrate how attackers can apply split-second phantom attacks remotely by embedding phantom road signs into an advertisement presented on a digital billboard, which causes Tesla's autopilot to suddenly stop the car in the middle of a road and Mobileye 630 to issue false notifications. We also demonstrate how attackers can use a projector to cause Tesla's autopilot to apply the brakes in response to a phantom of a pedestrian projected on the road, and Mobileye 630 to issue false notifications in response to a projected road sign. To counter this threat, we propose a countermeasure that can determine whether a detected object is a phantom or real using just the camera sensor. The countermeasure (GhostBusters) uses a "committee of experts" approach and combines the results obtained from four lightweight deep convolutional neural networks that assess the authenticity of an object based on the object's light, context, surface, and depth. We demonstrate our countermeasure's effectiveness (it obtains a TPR of 0.994 with an FPR of zero) and test its robustness to adversarial machine learning attacks.
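The "committee of experts" design described above maps naturally to a small model: four lightweight CNN experts each score a different cue (light, context, surface, depth) for a detected object, and a combiner fuses their outputs into one real-vs-phantom decision. The sketch below illustrates that structure only; the layer sizes, the fusion layer, and the ExpertCNN/GhostBustersCommittee names are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a committee-of-experts phantom detector, assuming
# four per-cue CNNs whose embeddings are fused by a small combiner.
# All dimensions and names here are illustrative, not the paper's exact model.
import torch
import torch.nn as nn

class ExpertCNN(nn.Module):
    """One lightweight expert. In the real system each expert would see a
    different view of the detected object (e.g., its surface patch, its
    surrounding context, or a depth/motion cue from consecutive frames)."""
    def __init__(self, in_channels=3, embed_dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (N, 32, 1, 1)
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x):
        z = self.features(x).flatten(1)  # (N, 32)
        return self.fc(z)                # (N, embed_dim)

class GhostBustersCommittee(nn.Module):
    """Concatenates the four experts' embeddings and maps them to a single
    probability that the detected object is real (vs. a phantom)."""
    def __init__(self, embed_dim=32):
        super().__init__()
        self.experts = nn.ModuleDict({
            name: ExpertCNN(embed_dim=embed_dim)
            for name in ("light", "context", "surface", "depth")
        })
        self.combiner = nn.Sequential(
            nn.Linear(4 * embed_dim, 16), nn.ReLU(), nn.Linear(16, 1)
        )

    def forward(self, views):
        # `views` maps each cue name to its input tensor of shape (N, 3, H, W).
        embeddings = [self.experts[name](views[name]) for name in self.experts]
        logit = self.combiner(torch.cat(embeddings, dim=1))
        return torch.sigmoid(logit)  # probability the object is real

# Usage: four 64x64 crops/views of the same detected object.
model = GhostBustersCommittee()
views = {name: torch.randn(1, 3, 64, 64)
         for name in ("light", "context", "surface", "depth")}
print(model(views))  # e.g., tensor([[0.49]]) before any training
```

Training such a committee end to end (real objects labeled 1, projected/billboard phantoms labeled 0) would yield the kind of accept/reject decision the abstract reports; the TPR/FPR figures quoted there come from the authors' evaluation, not from this sketch.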

Supplementary Material

MOV file: presentation video (Copy of CCS2020_fpe437_Phantom of the ADAS - Nano Zii.mov)



      Published In

      CCS '20: Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security
      October 2020
      2180 pages
      ISBN:9781450370899
      DOI:10.1145/3372297

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. advanced driver-assistance systems
      2. neural-networks
      3. security
      4. split-second phantom attacks

      Qualifiers

      • Research-article

      Conference

      CCS '20

      Acceptance Rates

      Overall Acceptance Rate 1,261 of 6,999 submissions, 18%


      Cited By

      • (2024) Invisible Optical Adversarial Stripes on Traffic Sign against Autonomous Vehicles. Proceedings of the 22nd Annual International Conference on Mobile Systems, Applications and Services, 534-546. https://doi.org/10.1145/3643832.3661854 (3 Jun 2024)
      • (2024) CARLA-GeAR: A Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Deep Learning Vision Models. IEEE Transactions on Intelligent Transportation Systems 25(8), 9840-9851. https://doi.org/10.1109/TITS.2024.3412432 (Aug 2024)
      • (2024) Stealthy and Effective Physical Adversarial Attacks in Autonomous Driving. IEEE Transactions on Information Forensics and Security 19, 6795-6809. https://doi.org/10.1109/TIFS.2024.3422920
      • (2024) PhaDe: Practical Phantom Spoofing Attack Detection for Autonomous Vehicles. IEEE Transactions on Information Forensics and Security 19, 4199-4214. https://doi.org/10.1109/TIFS.2024.3376192
      • (2024) Revisiting Automotive Attack Surfaces: a Practitioners' Perspective. 2024 IEEE Symposium on Security and Privacy (SP), 2348-2365. https://doi.org/10.1109/SP54263.2024.00080 (19 May 2024)
      • (2024) OptiCloak: Blinding Vision-Based Autonomous Driving Systems Through Adversarial Optical Projection. IEEE Internet of Things Journal 11(17), 28931-28944. https://doi.org/10.1109/JIOT.2024.3405006 (1 Sep 2024)
      • (2024) Optimizing Bayesian Belief Network Analysis for Autonomous Vehicles. 2024 5th International Conference on Electronics and Sustainable Communication Systems (ICESC), 1484-1489. https://doi.org/10.1109/ICESC60852.2024.10690041 (7 Aug 2024)
      • (2024) DeGhost: Unmasking Phantom Intrusions in Autonomous Recognition Systems. 2024 IEEE 9th European Symposium on Security and Privacy (EuroS&P), 78-94. https://doi.org/10.1109/EuroSP60621.2024.00013 (8 Jul 2024)
      • (2024) A conceptual framework for automation disengagements. Scientific Reports 14(1). https://doi.org/10.1038/s41598-024-57882-6 (15 Apr 2024)
      • (2024) A Theoretically Grounded Extension of Universal Attacks from the Attacker's Viewpoint. Machine Learning and Knowledge Discovery in Databases. Research Track, 283-300. https://doi.org/10.1007/978-3-031-70359-1_17 (22 Aug 2024)