DOI: 10.1145/3395352.3402629
Research article · ACM WiSec conference proceedings

Detecting acoustic backdoor transmission of inaudible messages using deep learning

Published: 16 July 2020

Abstract

The novel secret inaudible acoustic communication channel [11], referred to as the BackDoor channel, is a method of embedding inaudible signals in acoustic data that is likely to be processed by a trained deep neural network. In this paper we perform preliminary studies of the detectability of such a communication channel by deep learning algorithms trained on the original acoustic data targeted by the exploit. The BackDoor channel embeds inaudible messages by modulating them with a 40 kHz sinewave and transmitting them through ultrasonic speakers. The received composite signal is used to generate the BackDoor dataset for evaluating our neural networks, while the audible samples are played back and recorded to form a baseline dataset for training. We use the BackDoor dataset to evaluate the impact of the BackDoor channel on the classification of the acoustic data, and we show that the accuracy of the classifier is degraded. The degradation depends on the type of deep classifier, and it appears to be smaller for classifiers pre-trained with autoencoders. We also propose statistics that can be used to detect the out-of-distribution samples created by the BackDoor channel, such as the log-likelihood of the variational autoencoder used to pre-train the classifier or the empirical entropy of the classifier's output layer. The preliminary results presented in this paper indicate that the use of deep learning classifiers as detectors of the BackDoor secret channel merits further research.
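The abstract does not spell out how the second detection statistic is computed; as a minimal illustrative sketch (not the authors' implementation), the empirical entropy of a classifier's output layer can be obtained by applying a softmax to the logits and measuring the Shannon entropy of the resulting distribution. Out-of-distribution samples, such as those produced by the BackDoor channel, tend to yield flatter output distributions and hence higher entropy; the threshold below is a hypothetical value that would need to be calibrated on in-distribution data.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def output_entropy(logits):
    """Empirical entropy (in nats) of the classifier's softmax output.

    A confident prediction concentrates probability mass on one class
    and yields entropy near 0; a near-uniform output over K classes
    approaches log(K).
    """
    p = softmax(logits)
    return -sum(pi * math.log(pi + 1e-12) for pi in p)

def is_out_of_distribution(logits, threshold):
    """Flag a sample whose output entropy exceeds a calibrated threshold."""
    return output_entropy(logits) > threshold

# Confident (in-distribution-like) output vs. flat (OOD-like) output:
confident = [10.0, 0.0, 0.0, 0.0]
uncertain = [1.0, 1.1, 0.9, 1.0]
```

In practice one would calibrate the threshold on held-out clean recordings (e.g., at a fixed false-positive rate) and flag BackDoor-contaminated inputs whose entropy falls above it; the VAE log-likelihood statistic mentioned above would be thresholded analogously, but in the opposite direction.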

References

[1] Nicholas Carlini and David Wagner. 2018. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. In Deep Learning and Security Workshop.
[2] Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Xiaodong Song. 2017. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. arXiv abs/1712.05526 (2017).
[3] S-Square Enterprise Company Limited / Pro-Wave Electronics Corporation. 2019. Air Ultrasonic Ceramic Transducers 400ST/R160. Retrieved May 8, 2020 from http://www.farnell.com/datasheets/l686089.pdf?_ga=2.256607115.1881374495.1588917674-2094016181.1588917674
[4] I. J. Goodfellow, J. Shlens, and C. Szegedy. 2014. Explaining and Harnessing Adversarial Examples. arXiv preprint arXiv:1412.6572 (2014).
[5] Google Inc. 2017. AudioSet Ontology: Human Speech. Retrieved May 9, 2020 from https://research.google.com/audioset///ontology/speech.html
[6] National Instruments. 2019. NI myDAQ Device Specifications. Retrieved May 8, 2020 from https://www.ni.com/pdf/manuals/373061g.pdf
[7] S. Kokalj-Filipovic, R. Miller, and G. Vanhoy. 2019. Adversarial Examples in RF Deep Learning: Detection and Physical Robustness. In IEEE Global Conf. on Signal and Inform. Processing (GlobalSIP).
[8] A. Kurakin, I. Goodfellow, and S. Bengio. 2016. Adversarial Examples in the Physical World. arXiv preprint arXiv:1607.02533 (2016).
[9] Test Equipment Solutions Ltd. 2019. Arbitrary/Function Generators. Retrieved May 8, 2020 from http://www.testequipmenthq.com/datasheets/TEKTRONIX-AFG3021-Datasheet.pdf
[10] Yao Qin, Nicholas Carlini, Ian J. Goodfellow, Garrison W. Cottrell, and Colin Raffel. 2019. Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition. In ICML.
[11] Nirupam Roy, Haitham Hassanieh, and Romit Roy Choudhury. 2017. BackDoor: Making Microphones Hear Inaudible Sounds. MobiSys, Article 5 (June 2017).
[12] Nirupam Roy, Sheng Shen, Haitham Hassanieh, and Romit Roy Choudhury. 2018. Inaudible Voice Commands: The Long-Range Attack and Defense. In 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI 18). USENIX Association, Renton, WA, 547--560. https://www.usenix.org/conference/nsdi18/presentation/roy
[13] Liwei Song and Prateek Mittal. 2017. POSTER: Inaudible Voice Commands. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (Dallas, Texas, USA) (CCS '17). Association for Computing Machinery, New York, NY, USA, 2583--2585.
[14] Pete Warden. 2018. Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition. arXiv abs/1804.03209 (2018).
[15] Hiromu Yakura and Jun Sakuma. 2019. Robust Audio Adversarial Example for a Physical Attack. In IJCAI.
[16] Xuejing Yuan et al. 2018. CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition. In 27th USENIX Security Symposium.
[17] Guoming Zhang, Chen Yan, Xiaoyu Ji, Tianchen Zhang, Taimin Zhang, and Wenyuan Xu. 2017. DolphinAttack: Inaudible Voice Commands. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (Dallas, Texas, USA) (CCS '17). Association for Computing Machinery, New York, NY, USA, 103--117.

Cited By

  • (2023) Security and privacy problems in voice assistant applications. Computers and Security 134:C. DOI: 10.1016/j.cose.2023.103448. Online publication date: 1-Nov-2023.
  • (2021) Inaudible Manipulation of Voice-Enabled Devices Through BackDoor Using Robust Adversarial Audio Attacks. In Proceedings of the 3rd ACM Workshop on Wireless Security and Machine Learning, 37--42. DOI: 10.1145/3468218.3469048. Online publication date: 28-Jun-2021.


Published In

WiseML '20: Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning
July 2020
91 pages
ISBN:9781450380072
DOI:10.1145/3395352

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. BackDoor channel
  2. deep learning
  3. inaudible voice commands
  4. neural networks
  5. ultrasonic acoustics
  6. ultrasound injection


Conference

WiSec '20

