ABSTRACT
Hardware security for machine learning (ML) and artificial intelligence (AI) circuits is becoming a major topic within the cybersecurity framework. Although much research is ongoing on this front, the community has largely overlooked the educational component. In this paper, we present a training module comprising a set of hands-on experiments for teaching hardware security concepts to newcomers. Specifically, we propose five experiments and related training material that teach side-channel attacks and defenses on hardware implementations of neural networks. We report the organization and the findings after testing these experiments with sophomore undergraduate students at North Carolina State University. The students first study the basics of neural networks and then build a neural network inference circuit on a breadboard. They then conduct a differential power analysis attack on the hardware to steal the trained weights and implement a circuit-balancing (hiding) defense to mitigate the attack. The students develop all hardware and software code needed to perform the attacks and build the defenses. The results show that such complex notions of digital circuit design, neural networks, and side-channel analysis can be taught at the sophomore level with a well-designed set of experiments. Future extensions could include establishing an online infrastructure for remote teaching and efficient scaling to a broader audience.
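The weight-recovery attack described above can be illustrated with a minimal, self-contained simulation. This is an illustrative sketch, not the paper's actual lab code: it assumes the device leaks the Hamming weight of an 8-bit multiply (a standard power-analysis leakage model) and uses correlation-based differential power analysis to recover one hypothetical trained weight from noisy traces.

```python
import random

random.seed(0)

SECRET_WEIGHT = 23  # hypothetical trained weight the attacker tries to recover


def hamming_weight(v):
    """Number of set bits; a common model of a register's power draw."""
    return bin(v).count("1")


# Simulated measurement campaign: for random 8-bit inputs, the "power trace"
# is the Hamming weight of the neuron's product plus Gaussian noise.
inputs = [random.randrange(256) for _ in range(2000)]
traces = [hamming_weight((x * SECRET_WEIGHT) & 0xFF) + random.gauss(0, 1.0)
          for x in inputs]


def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    if va == 0 or vb == 0:  # constant hypothesis (e.g., weight guess 0)
        return 0.0
    return cov / (va * vb)


# Differential power analysis: for every weight guess, predict the leakage
# and correlate it with the measured traces; the correct guess stands out.
best = max(range(256),
           key=lambda w: pearson(
               [hamming_weight((x * w) & 0xFF) for x in inputs], traces))
print(best)
```

With a few thousand traces and moderate noise, the correct guess produces a markedly higher correlation than any wrong one, which is exactly the effect the circuit-balancing (hiding) defense is designed to suppress by flattening data-dependent power differences.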