Trojan Attacks and Defense for Speech Recognition

  • Conference paper

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1544))

Abstract

Mobile devices commonly employ speech recognition (SR) techniques to facilitate user interaction. Typical voice assistants on mobile devices detect a wake word or phrase before allowing users to issue voice commands. While the core functionality of contemporary SR systems relies on deep learning, researchers have shown that deep learning suffers from a variety of security issues. Among these threats, Trojan attacks in particular have attracted great interest in the research community. To conduct a Trojan attack, an adversary stealthily modifies a target model so that the compromised model outputs a predefined label whenever it is presented with a trigger. Most work in the literature has focused on Trojan attacks against image recognition, and there is limited work in the SR domain. Given the increasing use of SR systems in everyday devices such as mobile phones, Trojan attacks on SR pose a great threat to the public and are therefore an important concern for mobile internet security. Despite this growing importance, no extensive review of Trojan attacks on SR has been conducted. This paper fills that gap by presenting an overview of existing techniques for conducting Trojan attacks against SR systems and for defending against them. The purpose is to provide researchers with an in-depth comparison of current methods and of the challenges faced in this important research area.
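
To make the threat model concrete, the following is a minimal sketch of the data-poisoning variant of a Trojan attack on a keyword-spotting model. All specifics are illustrative assumptions rather than details of any particular attack covered in this survey: the trigger is a quiet 1 kHz tone, the target label "open_door" is hypothetical, and the poisoning rate is chosen arbitrarily.

```python
# Minimal sketch of Trojan data poisoning for a keyword-spotting dataset.
# Hypothetical setup (not taken from the paper): a short, quiet 1 kHz tone is
# mixed into a small fraction of training clips, whose labels are then flipped
# to the attacker-chosen target class.

import numpy as np

SAMPLE_RATE = 16_000          # assumed audio sample rate (Hz)
TARGET_LABEL = "open_door"    # hypothetical attacker-chosen target label
POISON_RATE = 0.05            # fraction of training clips to poison
TRIGGER_SECONDS = 0.2         # trigger length in seconds
TRIGGER_GAIN = 0.05           # keep the tone quiet so it stays inconspicuous


def make_trigger() -> np.ndarray:
    """Return a 0.2 s, 1 kHz sine tone used as the audio trigger."""
    t = np.arange(int(SAMPLE_RATE * TRIGGER_SECONDS)) / SAMPLE_RATE
    return TRIGGER_GAIN * np.sin(2 * np.pi * 1000.0 * t)


def poison_dataset(clips, labels, rng=None):
    """Mix the trigger into a random subset of clips and relabel them.

    clips: list of 1-D float arrays in [-1, 1]; labels: list of class names.
    Returns poisoned copies of both lists.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    trigger = make_trigger().astype(np.float32)
    clips = [c.astype(np.float32, copy=True) for c in clips]
    labels = list(labels)

    n_poison = max(1, int(POISON_RATE * len(clips)))
    idx = rng.choice(len(clips), size=n_poison, replace=False)
    for i in idx:
        n = min(len(trigger), len(clips[i]))
        clips[i][:n] += trigger[:n]                   # overlay trigger at clip start
        np.clip(clips[i], -1.0, 1.0, out=clips[i])    # keep samples in valid range
        labels[i] = TARGET_LABEL                      # flip label to the target class
    return clips, labels
```

A model trained on such a poisoned set behaves normally on clean audio but outputs the target label whenever the trigger tone is present; detecting or removing this hidden behaviour is the goal of the defenses reviewed in this paper.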

Notes

  1. https://github.com/PurduePAML/TrojanNN.

Author information

Corresponding author

Correspondence to Wei Zong.

Copyright information

© 2022 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Zong, W., Chow, YW., Susilo, W., Kim, J. (2022). Trojan Attacks and Defense for Speech Recognition. In: You, I., Kim, H., Youn, TY., Palmieri, F., Kotenko, I. (eds) Mobile Internet Security. MobiSec 2021. Communications in Computer and Information Science, vol 1544. Springer, Singapore. https://doi.org/10.1007/978-981-16-9576-6_14

  • DOI: https://doi.org/10.1007/978-981-16-9576-6_14

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-9575-9

  • Online ISBN: 978-981-16-9576-6

  • eBook Packages: Computer Science, Computer Science (R0)
