DOI: 10.1145/3400302.3418782 · ICCAD '20 conference proceedings · Public Access

New passive and active attacks on deep neural networks in medical applications

Published: 17 December 2020

Abstract

The security of deep neural network (DNN) inference engines, i.e., trained DNN models deployed on various platforms, has become one of the biggest challenges in bringing artificial intelligence to domains where privacy, safety, and reliability are of paramount importance, such as medical applications. Beyond classic software attacks such as model inversion and evasion, a new attack surface is emerging: implementation attacks, encompassing both passive side-channel attacks and active fault-injection and adversarial attacks, which exploit implementation peculiarities of DNNs to breach their confidentiality and integrity. This paper presents several novel passive and active attacks on DNNs that we have developed and tested on medical datasets. These new attacks reveal a largely under-explored attack surface of DNN inference engines, and the insights gained during attack exploration provide valuable guidance for effectively protecting DNN execution against reverse engineering and integrity violations.
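
As a concrete illustration of the integrity half of this attack surface, the short Python sketch below (our illustration, not code from the paper) shows why a single hardware-induced bit flip in a stored IEEE-754 float32 weight, e.g., from a Rowhammer-style memory fault, is so damaging: flipping the most significant exponent bit of a small weight inflates it to about 1.7 x 10^38, enough to saturate any dot product it participates in, while a mantissa flip barely perturbs it.

    # Minimal sketch (illustrative only): a single bit flip in an
    # IEEE-754 float32 weight, as induced by Rowhammer-style memory
    # faults, can inflate a small DNN parameter by ~38 orders of magnitude.
    import struct

    def flip_bit(value: float, bit: int) -> float:
        """Flip one bit of a float32 (bit 0 = mantissa LSB, bit 31 = sign)."""
        (bits,) = struct.unpack("<I", struct.pack("<f", value))
        bits ^= 1 << bit
        (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
        return flipped

    weight = 0.5  # a typical small, well-behaved DNN weight
    print(flip_bit(weight, 30))  # 1.7014118346046923e+38 (top exponent bit)
    print(flip_bit(weight, 22))  # 0.75 (top mantissa bit: barely noticeable)

This asymmetry between exponent and mantissa bits is why published fault-injection attacks on DNNs tend to target the exponent bits of a few carefully chosen weights rather than corrupting memory at random.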



Published In

ICCAD '20: Proceedings of the 39th International Conference on Computer-Aided Design
November 2020, 1396 pages
ISBN: 9781450380263
DOI: 10.1145/3400302
General Chair: Yuan Xie

In-Cooperation

• IEEE CAS
• IEEE CEDA
• IEEE CS

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. deep neural networks
    2. fault injection attacks
    3. side-channel attacks

    Qualifiers

• Invited talk

    Funding Sources

    • National Science Foundation

Conference

ICCAD '20

Acceptance Rates

Overall acceptance rate: 457 of 1,762 submissions (26%)

