Towards Trained Model Confidentiality and Integrity Using Trusted Execution Environments

Conference paper, published in Applied Cryptography and Network Security Workshops (ACNS 2021)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 12809)

Abstract

In machine learning security, protecting the confidentiality and integrity of trained models is essential. Leakage of the data in a trained model leads not only to infringement of intellectual property but also to various attacks, such as adversarial example attacks. Recent works have proposed methods that protect trained models using trusted execution environments (TEEs), which provide an isolated environment that malicious software cannot manipulate. However, simply porting the program of a trained model into a TEE generally does not provide effective protection, for several reasons: the limited memory size of TEEs prevents loading the large number of parameters characteristic of deep learning; TEEs increase runtime; and the parameters remain exposed to manipulation (a threat to the integrity of the trained model). This paper proposes a novel TEE-based method for protecting trained models, aimed mainly at deep learning on embedded devices. The proposed method is characterized by memory saving, low runtime overhead, and detection of parameter manipulation. In the experiments, it is implemented and evaluated using Arm TrustZone and OP-TEE.
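
To make the abstract's design goals concrete, here is a minimal C sketch of block-wise parameter loading with per-block integrity checks, in the spirit of the method described above but not taken from the paper. All names (BLOCK_SIZE, load_layer_params, expected_digests) are hypothetical, and FNV-1a stands in for a cryptographic hash purely to keep the sketch self-contained; a real deployment would run this check inside the secure world and use SHA-256 or an HMAC (e.g., via Mbed TLS). Loading and verifying one block at a time is what bounds the TEE memory footprint, and the per-block digest comparison is what detects parameter manipulation.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096  /* bytes of parameters loaded per step (hypothetical) */

/* Stand-in for a cryptographic hash; a real TEE implementation would
 * compute SHA-256 or an HMAC inside the secure world instead. */
static uint64_t fnv1a(const uint8_t *data, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;  /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;           /* FNV prime */
    }
    return h;
}

/* Load one layer's parameters block by block, comparing each block's
 * digest with a reference digest provisioned at model deployment.
 * Returns 0 on success, -1 on a truncated file, -2 on an integrity
 * violation (i.e., detected parameter manipulation). */
static int load_layer_params(FILE *fp, float *dst, size_t n_floats,
                             const uint64_t *expected_digests)
{
    uint8_t block[BLOCK_SIZE];
    size_t remaining = n_floats * sizeof(float);
    size_t off = 0, idx = 0;

    while (remaining > 0) {
        size_t chunk = remaining < BLOCK_SIZE ? remaining : BLOCK_SIZE;
        if (fread(block, 1, chunk, fp) != chunk)
            return -1;
        if (fnv1a(block, chunk) != expected_digests[idx++])
            return -2;
        memcpy((uint8_t *)dst + off, block, chunk);
        off += chunk;
        remaining -= chunk;
    }
    return 0;
}

For the normal-world side, the following sketch shows how an application might hand an input buffer to a trusted application (TA) through the standard GlobalPlatform TEE Client API shipped with OP-TEE. The TA UUID, command ID, and buffer shapes are placeholders; only the TEEC_* calls themselves are the real client API.

#include <stdio.h>
#include <string.h>
#include <tee_client_api.h>

/* Placeholder UUID; a real TA defines its own. */
static const TEEC_UUID ta_uuid = {
    0x12345678, 0x1234, 0x1234,
    { 0x12, 0x34, 0x12, 0x34, 0x12, 0x34, 0x12, 0x34 }
};

#define CMD_INFER 0  /* hypothetical command ID understood by the TA */

int main(void)
{
    TEEC_Context ctx;
    TEEC_Session sess;
    TEEC_Operation op;
    uint32_t origin;
    float input[28 * 28] = { 0 };  /* e.g., one MNIST-sized image */
    float output[10] = { 0 };      /* e.g., ten class scores */

    if (TEEC_InitializeContext(NULL, &ctx) != TEEC_SUCCESS)
        return 1;
    if (TEEC_OpenSession(&ctx, &sess, &ta_uuid, TEEC_LOGIN_PUBLIC,
                         NULL, NULL, &origin) != TEEC_SUCCESS)
        return 1;

    /* Pass the input and output buffers to the secure world. */
    memset(&op, 0, sizeof(op));
    op.paramTypes = TEEC_PARAM_TYPES(TEEC_MEMREF_TEMP_INPUT,
                                     TEEC_MEMREF_TEMP_OUTPUT,
                                     TEEC_NONE, TEEC_NONE);
    op.params[0].tmpref.buffer = input;
    op.params[0].tmpref.size = sizeof(input);
    op.params[1].tmpref.buffer = output;
    op.params[1].tmpref.size = sizeof(output);

    if (TEEC_InvokeCommand(&sess, CMD_INFER, &op, &origin) == TEEC_SUCCESS)
        printf("class 0 score: %f\n", output[0]);

    TEEC_CloseSession(&sess);
    TEEC_FinalizeContext(&ctx);
    return 0;
}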



Acknowledgments

This work was supported by JST-Mirai Program Grant Number JPMJMI19B6, Japan.

Author information

Corresponding author

Correspondence to Tsunato Nakai.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Nakai, T., Suzuki, D., Fujino, T. (2021). Towards Trained Model Confidentiality and Integrity Using Trusted Execution Environments. In: Zhou, J., et al. (eds.) Applied Cryptography and Network Security Workshops. ACNS 2021. Lecture Notes in Computer Science, vol. 12809. Springer, Cham. https://doi.org/10.1007/978-3-030-81645-2_10

  • DOI: https://doi.org/10.1007/978-3-030-81645-2_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-81644-5

  • Online ISBN: 978-3-030-81645-2

  • eBook Packages: Computer Science (R0)
