DOI: 10.1145/3400302.3415726
Research article (Public Access)

Concurrent weight encoding-based detection for bit-flip attack on neural network accelerators

Published: 17 December 2020

Abstract

The recently revealed Bit-Flip Attack (BFA) against deep neural networks (DNNs) is highly concerning: it can completely mislead the inference of a quantized DNN by flipping only a few weight bits in hardware memories, for example via DRAM rowhammer. A key question before applying any BFA mitigation, such as retraining or model reloading, is how to detect such an attack quickly and accurately without impacting normal inference. In this paper, we propose a weight encoding-based framework that concurrently detects BFA by leveraging the spatial locality of the bit flips induced by BFA together with a fast encoding of only the sensitive weights. Extensive experimental results show that our method accurately differentiates the malicious fault model of BFA from random bit flips, which can also occur in weight memories but do not degrade accuracy the way BFA does, all with very low overhead across various DNNs on both the CIFAR-10 and ImageNet datasets. To the best of our knowledge, this is the first real-time detection framework for BFA against the quantized DNNs that are widely deployed in hardware accelerators.
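To make the detection idea concrete, below is a minimal Python sketch of one plausible instantiation of such a scheme: rank weights by gradient magnitude, record a compact signature over only the most significant bits of the sensitive weights, and recompute that signature concurrently with inference. The function names, the gradient-based sensitivity ranking, the MSB-only signature, and the mismatch threshold are all illustrative assumptions; the abstract does not specify the paper's actual encoding.

```python
# Hedged sketch: NOT the paper's algorithm. The sensitivity ranking,
# MSB-only signature, and threshold are assumptions for illustration.
import numpy as np

def top_k_sensitive(grads: np.ndarray, k: int = 512) -> np.ndarray:
    """Indices of the k weights with the largest gradient magnitude,
    a common proxy for the bits BFA's progressive search targets."""
    return np.argsort(np.abs(grads).ravel())[-k:]

def signature(qweights: np.ndarray, idx: np.ndarray) -> np.ndarray:
    """Golden encoding: pack the most significant bit (the sign bit of
    a signed 8-bit weight) of each sensitive weight into a bit vector."""
    return (qweights.ravel()[idx].astype(np.uint8) >> 7) & 1

def classify(qweights, idx, golden, bfa_threshold: int = 2) -> str:
    """Recompute the signature concurrently with inference and compare.
    Several MSB flips inside the small monitored set match BFA's spatial
    locality; an isolated hit looks more like a random soft error."""
    mismatches = int(np.sum(signature(qweights, idx) != golden))
    if mismatches == 0:
        return "clean"
    return "BFA suspected" if mismatches >= bfa_threshold else "random bit flip"

# Usage: encode once offline, then check whenever weights are fetched.
w = np.random.randint(-128, 128, size=4096, dtype=np.int8)  # quantized layer
g = np.random.randn(4096)                                   # stand-in gradients
idx = top_k_sensitive(g, k=256)
golden = signature(w, idx)
w[idx[0]] ^= np.int8(-128)       # flip the MSB of one monitored weight
print(classify(w, idx, golden))  # single mismatch -> "random bit flip"
```

Because only the top-k weights are encoded, the stored signature and the runtime comparison stay small, which is consistent with the abstract's claim of a fast encoding of sensitive weights only and very low detection overhead.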



Published In

ICCAD '20: Proceedings of the 39th International Conference on Computer-Aided Design
November 2020
1396 pages
ISBN:9781450380263
DOI:10.1145/3400302
  • General Chair: Yuan Xie
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


In-Cooperation

  • IEEE CAS
  • IEEE CEDA
  • IEEE CS

Publisher

Association for Computing Machinery

New York, NY, United States




Funding Sources

  • NSF

Conference

ICCAD '20

Acceptance Rates

Overall Acceptance Rate 457 of 1,762 submissions, 26%


