Research Article
DOI: 10.1145/3488932.3497766

ALEXA VERSUS ALEXA: Controlling Smart Speakers by Self-Issuing Voice Commands

Published: 30 May 2022

Abstract

We present ALEXA VERSUS ALEXA (AvA), a novel attack that weaponizes audio files containing voice commands, together with audio reproduction methods, to gain control of Amazon Echo devices for a prolonged amount of time. AvA exploits the fact that Alexa running on an Echo device correctly interprets voice commands originating from audio files even when the device itself plays them -- i.e., it exploits a command self-issue vulnerability. AvA thus removes the need for a rogue speaker in proximity to the victim's Echo, a constraint that many attacks share. With AvA, an attacker can self-issue any permissible command to Echo, controlling it on behalf of the legitimate user. We have verified that, via AvA, attackers can control smart appliances within the household, buy unwanted items, tamper with linked calendars, and eavesdrop on the user. We also discovered two additional Echo vulnerabilities, which we call Full Volume and Break Tag Chain. Full Volume doubles the self-issue command recognition rate on average, allowing attackers to perform additional self-issue commands. Break Tag Chain extends the time a skill can run without user interaction from eight seconds to more than one hour, enabling attackers to set up realistic social engineering scenarios. By exploiting these vulnerabilities, the adversary can self-issue commands that are correctly executed 99% of the time and can keep control of the device for a prolonged period. We reported these vulnerabilities to Amazon via their vulnerability research program, which rated them with a Medium severity score. In addition, we discuss the results of tests performed on three volunteer Echo-equipped households to verify the feasibility of AvA in real scenarios, finding that the attack remains undetected and operative in most cases. Finally, to assess the limitations of AvA on a larger scale, we report the results of a survey of a study group of 18 users, showing that most of the factors that would limit AvA are rarely present in practice.
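The Break Tag Chain vulnerability mentioned above abuses Speech Synthesis Markup Language (SSML) break tags to keep a malicious skill silently running far beyond the usual eight-second interaction window. A minimal sketch of what such a skill response might look like follows; the response is illustrative, not taken from the paper. Alexa's SSML reference caps each break tag at 10 seconds, so longer silences must be built by chaining tags:

```xml
<!-- Illustrative skill response: 30 seconds of silence from chained break
     tags; repeating this pattern across reprompts stretches the skill's
     runtime without any audible output. -->
<speak>
  <break time="10s"/>
  <break time="10s"/>
  <break time="10s"/>
</speak>
```

While the skill idles silently, the user believes the device is inactive, which is what enables the prolonged social engineering scenarios described in the abstract.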

Supplementary Material

MP4 File (ASIA-CCS22-fp141.mp4)
In this video, we present Alexa versus Alexa (AvA), a novel attack against Amazon Echo devices: we found that if an Echo device plays an audio file that contains a voice command, that command is recognized and executed as if it came from the legitimate user. This allows an attacker to control any Echo device from a distance. We illustrate the possible attack scenarios, then analyze the different attack vectors, both local and remote. We also introduce two additional vulnerabilities we found on the Echo device and show that they increase both the success rate and the overall impact of the attack. We demonstrate that the attack is feasible by presenting the results of a field study and of a survey, which highlight that the preconditions for the attack are rather common. Finally, we discuss possible countermeasures.
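The self-issue loop at the heart of the attack can be sketched as follows. This is a hypothetical illustration, not code from the paper: `stream_to_echo` stands in for whatever transport the attacker uses (e.g., a paired Bluetooth speaker connection or a radio-streaming skill), and in the real attack each command string would first be rendered to an audio file by any text-to-speech engine before playback.

```python
def build_commands(wake_word: str, actions: list[str]) -> list[str]:
    """Prefix each desired action with the wake word, the form Alexa expects."""
    return [f"{wake_word}, {action}" for action in actions]

def run_attack(stream_to_echo) -> None:
    """Self-issue a sequence of commands through a caller-supplied transport.

    In the real attack the command text is first synthesized to an audio
    file; when the Echo plays that file, it recognizes and executes its
    own output as if the legitimate user had spoken it.
    """
    for command in build_commands("Alexa", [
        "turn off the lights",       # control smart appliances
        "what's on my calendar",     # probe a linked calendar
    ]):
        stream_to_echo(command)
```

The key point the sketch captures is that the attacker never needs a physical speaker near the device: the Echo is both the source and the recipient of each command.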




      Published In

      ASIA CCS '22: Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security
      May 2022
      1291 pages
      ISBN:9781450391405
      DOI:10.1145/3488932
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. alexa skills
      2. iot
      3. self-activation
      4. smart speakers
      5. voice commands


      Funding Sources

      • Project MEGABIT
      • PhD studentship from Royal Holloway University of London

      Conference

      ASIA CCS '22

      Acceptance Rates

      Overall Acceptance Rate 418 of 2,322 submissions, 18%

      Cited By

      • (2024) Understanding GDPR Non-Compliance in Privacy Policies of Alexa Skills in European Marketplaces. Proceedings of the ACM Web Conference 2024, 1081-1091. DOI: 10.1145/3589334.3645409. Online publication date: 13-May-2024.
      • (2024) Manipulating Voice Assistants Eavesdropping via Inherent Vulnerability Unveiling in Mobile Systems. IEEE Transactions on Mobile Computing, 23(12), 11549-11563. DOI: 10.1109/TMC.2024.3401096. Online publication date: Dec-2024.
      • (2024) Station: Gesture-Based Authentication for Voice Interfaces. IEEE Internet of Things Journal, 11(12), 22668-22683. DOI: 10.1109/JIOT.2024.3382721. Online publication date: 15-Jun-2024.
      • (2024) A Closer Look at Access Control in Multi-User Voice Systems. IEEE Access, 12, 40933-40946. DOI: 10.1109/ACCESS.2024.3379141. Online publication date: 2024.
      • (2024) Exploring Vulnerabilities in Voice Command Skills for Connected Vehicles. Security and Privacy in Cyber-Physical Systems and Smart Vehicles, 3-14. DOI: 10.1007/978-3-031-51630-6_1. Online publication date: 5-Feb-2024.
      • (2023) SkillScanner: Detecting Policy-Violating Voice Applications Through Static Analysis at the Development Phase. Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 2321-2335. DOI: 10.1145/3576915.3616650. Online publication date: 15-Nov-2023.
      • (2023) Protecting Voice-Controllable Devices Against Self-Issued Voice Commands. 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P), 160-174. DOI: 10.1109/EuroSP57164.2023.00019. Online publication date: Jul-2023.
      • (2023) Fallbeispiele erfolgreicher Angriffe [Case Studies of Successful Attacks]. Computer Hacking, 251-266. DOI: 10.1007/978-3-662-67030-9_14. Online publication date: 28-Sep-2023.
      • (2023) The VOCODES Kill Chain for Voice Controllable Devices. Computer Security. ESORICS 2023 International Workshops, 176-197. DOI: 10.1007/978-3-031-54129-2_11. Online publication date: 25-Sep-2023.
      • (2022) Artificial Intelligence Governance: A Study on the Ethical and Security Issues that Arise. 2022 International Conference on Computing, Electronics & Communications Engineering (iCCECE), 104-111. DOI: 10.1109/iCCECE55162.2022.9875082. Online publication date: 17-Aug-2022.
