Abstract
An artificial intelligence (AI) accelerator is specialized hardware designed to speed up machine learning applications. Such applications may require isolated execution to protect the confidentiality of model information and processing data and the integrity of application tasks. For example, when critical applications such as biometrics use machine learning, they must execute in a trusted environment, isolated so that they cannot be compromised by other applications. Isolated execution of a machine learning application on an AI accelerator is often achieved with a proprietary hardware architecture that incorporates dedicated security circuits for the accelerator. In contrast, several previous works have proposed using open-source or general-purpose security functions for isolated execution, both to reduce design costs and to apply commonly across various accelerators. This paper proposes an isolated execution method for AI accelerators using OP-TEE, an open-source Trusted Execution Environment (TEE) implementing the Arm TrustZone technology. The contributions are to analyze the security threats to AI accelerators, propose a countermeasure based on OP-TEE, and evaluate an implementation of the isolated execution.
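As background on the OP-TEE model the paper builds on: a normal-world client communicates with a secure-world trusted application (TA) through the GlobalPlatform TEE Client API, which OP-TEE implements. The sketch below is illustrative only, not the paper's implementation; the TA UUID and command ID are hypothetical placeholders, and compiling and running it requires the OP-TEE client library on a TrustZone-enabled SoC.

```c
/* Minimal sketch of a normal-world client invoking a trusted application
 * (TA) via the GlobalPlatform TEE Client API implemented by OP-TEE.
 * The UUID and command ID are hypothetical placeholders; a real client
 * would use the UUID of the accelerator-control TA in the secure world. */
#include <string.h>
#include <tee_client_api.h>

#define ACCEL_TA_UUID \
    { 0x12345678, 0x1234, 0x1234, \
      { 0x12, 0x34, 0x12, 0x34, 0x12, 0x34, 0x12, 0x34 } }

#define CMD_RUN_INFERENCE 0  /* hypothetical command identifier */

int run_isolated_inference(void)
{
    TEEC_Context ctx;
    TEEC_Session sess;
    TEEC_Operation op;
    TEEC_UUID uuid = ACCEL_TA_UUID;
    uint32_t err_origin;
    TEEC_Result res;

    /* Connect to the TEE (secure world). */
    res = TEEC_InitializeContext(NULL, &ctx);
    if (res != TEEC_SUCCESS)
        return -1;

    /* Open a session with the trusted application. */
    res = TEEC_OpenSession(&ctx, &sess, &uuid, TEEC_LOGIN_PUBLIC,
                           NULL, NULL, &err_origin);
    if (res != TEEC_SUCCESS) {
        TEEC_FinalizeContext(&ctx);
        return -1;
    }

    /* Ask the TA to run one inference; data parameters omitted here,
     * since passing model/input buffers is TA-specific. */
    memset(&op, 0, sizeof(op));
    op.paramTypes = TEEC_PARAM_TYPES(TEEC_NONE, TEEC_NONE,
                                     TEEC_NONE, TEEC_NONE);
    res = TEEC_InvokeCommand(&sess, CMD_RUN_INFERENCE, &op, &err_origin);

    TEEC_CloseSession(&sess);
    TEEC_FinalizeContext(&ctx);
    return res == TEEC_SUCCESS ? 0 : -1;
}
```

The design point this illustrates is that the untrusted (normal-world) side never touches model data directly: it only opens a session and issues commands, while the TA in the secure world mediates access to the accelerator.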
Acknowledgments
This work was supported by JST-Mirai Program Grant Number JPMJMI19B6, Japan.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Nakai, T., Suzuki, D., Fujino, T. (2022). Towards Isolated AI Accelerators with OP-TEE on SoC-FPGAs. In: Zhou, J., et al. Applied Cryptography and Network Security Workshops. ACNS 2022. Lecture Notes in Computer Science, vol 13285. Springer, Cham. https://doi.org/10.1007/978-3-031-16815-4_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-16814-7
Online ISBN: 978-3-031-16815-4