Abstract
In this paper, we present a novel hardware-trojan-assisted side-channel attack to reverse engineer DNN architectures on edge FPGA accelerators. In particular, our attack targets the widely used Versatile Tensor Accelerator (VTA). A hardware trojan is employed to track memory transactions by monitoring the AXI interface signals of VTA’s submodules. The memory side-channel information is leaked through a UART port, revealing the DNN architecture. Our experiments demonstrate the effectiveness of the proposed attack and highlight the need for robust security measures to protect DNN intellectual property (IP) models deployed on edge FPGA platforms.
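To illustrate the flow described above, the following is a minimal host-side sketch of how a UART dump of per-submodule AXI transaction counts could be parsed and summarized per layer. It is illustrative only: the log format, the field names (LAYER, MODULE, AXI_TXNS), and the capture file path are assumptions for exposition, not the implementation used in the paper.

# Minimal, hypothetical sketch of the host-side analysis step implied by the
# abstract: parse a UART capture in which the trojan reports, per DNN layer,
# the number of AXI transactions observed on each VTA submodule
# (load / compute / store). The log format and field names are assumptions.

from collections import defaultdict


def parse_uart_capture(path):
    """Return {layer_id: {submodule: txn_count}} from a logged UART session.

    Assumed line format (illustrative only):
        LAYER=3 MODULE=load AXI_TXNS=1024
    """
    trace = defaultdict(dict)
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            fields = dict(tok.split("=", 1) for tok in line.split())
            trace[int(fields["LAYER"])][fields["MODULE"]] = int(fields["AXI_TXNS"])
    return trace


def summarize(trace):
    """Print per-layer AXI activity; the ratio of load/store to compute
    transactions is the kind of coarse signature that can hint at layer
    type and dimensions."""
    for layer in sorted(trace):
        modules = trace[layer]
        total = sum(modules.values())
        print(f"layer {layer:2d}: total AXI txns = {total:8d}  breakdown = {modules}")


if __name__ == "__main__":
    # 'uart_capture.log' is a placeholder path for a saved UART dump.
    summarize(parse_uart_capture("uart_capture.log"))

A recovered per-layer trace of this form is the kind of memory side-channel information that the attack correlates with candidate DNN architectures.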
Acknowledgement
This work was supported in part by the NTU-DESAY SV Research Program 2018–0980, and in part by the Ministry of Education, Singapore, under its Academic Research Fund Tier 2, Grant MOE-T2EP20121-0008.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Chandrasekar, S., Lam, SK., Thambipillai, S. (2023). DNN Model Theft Through Trojan Side-Channel on Edge FPGA Accelerator. In: Palumbo, F., Keramidas, G., Voros, N., Diniz, P.C. (eds) Applied Reconfigurable Computing. Architectures, Tools, and Applications. ARC 2023. Lecture Notes in Computer Science, vol 14251. Springer, Cham. https://doi.org/10.1007/978-3-031-42921-7_10
DOI: https://doi.org/10.1007/978-3-031-42921-7_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-42920-0
Online ISBN: 978-3-031-42921-7
eBook Packages: Computer Science, Computer Science (R0)