Abstract
In machine learning security, it is important to protect the confidentiality and integrity of trained models. Leakage of a trained model's data not only infringes intellectual property but also enables attacks such as adversarial-example attacks. Recent works have proposed methods that protect trained models using trusted execution environments (TEEs), which provide an isolated environment that malicious software cannot manipulate. However, simply porting a trained model's program into a TEE generally does not yield effective protection, for several reasons: the limited memory of a TEE prevents loading a large number of parameters, which is characteristic of deep learning; execution inside a TEE increases runtime; and the parameters may still be manipulated, threatening the trained model's integrity. This paper proposes a novel TEE-based method to protect trained models, aimed mainly at deep learning on embedded devices. The proposed method is characterized by memory saving, low runtime overhead, and detection of parameter manipulation. In the experiments, it is implemented and evaluated using Arm TrustZone and OP-TEE.
Acknowledgments
This work was supported by JST-Mirai Program Grant Number JPMJMI19B6, Japan.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Nakai, T., Suzuki, D., Fujino, T. (2021). Towards Trained Model Confidentiality and Integrity Using Trusted Execution Environments. In: Zhou, J., et al. Applied Cryptography and Network Security Workshops. ACNS 2021. Lecture Notes in Computer Science, vol. 12809. Springer, Cham. https://doi.org/10.1007/978-3-030-81645-2_10
Print ISBN: 978-3-030-81644-5
Online ISBN: 978-3-030-81645-2