Using Model Optimization as Countermeasure against Model Recovery Attacks

  • Conference paper
  • First Online:
Applied Cryptography and Network Security Workshops (ACNS 2023)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13907))


Abstract

Machine learning (ML) and deep learning (DL) have been widely studied and adopted for applications across various fields. There is a growing demand for ML implementations, as well as ML accelerators, on small devices for Internet-of-Things (IoT) applications. These accelerators typically enable efficient edge-based inference using pre-trained deep neural network models: the model is first trained on a more powerful machine and then deployed on the edge device for inference. However, several attacks have been reported that can recover and steal such pre-trained models. For example, a recently reported cold-boot attack on an edge-based machine learning accelerator demonstrated recovery of the target neural network model (architecture and weights). Using this information, the adversary can reconstruct the model, albeit with certain errors due to corruption of the data during the recovery process. This indicates a potential vulnerability in ML/DL model implementations on edge devices for IoT applications. In this work, we investigate generic countermeasures against model recovery attacks based on neural network (NN) model optimization techniques such as quantization, binarization, and pruning. We first study the performance improvement these transformations offer, and then how they could help mitigate the model recovery process. Our experimental results show that model optimization methods, in addition to achieving better performance, can result in accuracy degradation in the recovered model, which helps to mitigate model recovery attacks.
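The optimization techniques named in the abstract can be sketched in miniature. The pure-Python helpers below are a hypothetical illustration of two of them, post-training int8 quantization and magnitude pruning, applied to a flat list of weights; they are not the authors' actual pipeline (which targets deployed accelerator models), only a minimal sketch of the transformations under study.

```python
def quantize_int8(weights):
    # Affine (asymmetric) post-training quantization to signed 8-bit codes.
    # Hypothetical illustration of the technique, not the paper's tooling.
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255.0 or 1.0  # avoid zero scale for constant weights
    zero_point = round(-w_min / scale) - 128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate float weights from their int8 codes.
    return [(qi - zero_point) * scale for qi in q]

def prune_by_magnitude(weights, sparsity=0.5):
    # Zero out the `sparsity` fraction of weights with the smallest magnitude.
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k] if k < len(weights) else float("inf")
    return [0.0 if abs(w) < threshold else w for w in weights]

weights = [0.1, -0.4, 0.05, 0.8]
q, scale, zp = quantize_int8(weights)
recovered = dequantize(q, scale, zp)
pruned = prune_by_magnitude(weights, sparsity=0.5)
```

Both transformations shrink or reshape the parameter representation an adversary must recover; the paper's thesis is that errors introduced during recovery (e.g. by cold-boot bit corruption) hit such optimized representations harder, degrading the stolen model's accuracy.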



Acknowledgment

This research is supported by the National Research Foundation, Singapore, and the Cyber Security Agency of Singapore under its National Cybersecurity Research & Development Programme (Cyber-Hardware Forensic & Assurance Evaluation R&D Programme). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore, or the Cyber Security Agency of Singapore.

Author information

Corresponding author

Correspondence to Dirmanto Jap.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Jap, D., Bhasin, S. (2023). Using Model Optimization as Countermeasure against Model Recovery Attacks. In: Zhou, J., et al. Applied Cryptography and Network Security Workshops. ACNS 2023. Lecture Notes in Computer Science, vol 13907. Springer, Cham. https://doi.org/10.1007/978-3-031-41181-6_11

  • DOI: https://doi.org/10.1007/978-3-031-41181-6_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-41180-9

  • Online ISBN: 978-3-031-41181-6

  • eBook Packages: Computer Science, Computer Science (R0)
