
Int-Monitor: a model triggered hardware trojan in deep learning accelerators

Published in The Journal of Supercomputing.

Abstract

Deep learning accelerators have domain-specific architectures; their special memory hierarchies and working modes can introduce new, critical security vulnerabilities. A neural network reuses PE resources layer by layer: after a layer finishes, the accelerator raises an interrupt to tell the host processor to dispatch the next layer. By snooping on these interrupt signal patterns, an attacker can identify specific deep neural network (DNN) models and launch hardware trojan attacks. In this paper, we propose Int-Monitor, a novel DNN-model-triggered hardware trojan in DNN accelerators. By implanting a well-designed interrupt monitor between the host processor and the DNN accelerator, this backdoor can recognize specific DNN models and trigger the trojan to attack the DNN bias buffers. By attacking the global bias buffer, the trojan prevents the activation of neurons in the DNN model; as a result, network forward propagation becomes invalid and the accelerator denies service. Runtime experiments on the LeNet, ResNet, YOLOv2, and YOLOv4-tiny DNN models show that Int-Monitor successfully attacks FPGA-based DNN accelerator SoCs. RTL synthesis and implementation show that the trojan incurs only a small hardware overhead and negligible power consumption: on average, 0.5% and 0.2% hardware overhead and 0.622% and 0.187% additional power consumption in the SIMD and NVDLA accelerators, respectively. Unlike previous trojans that use specially crafted input data as a trigger, this trojan lets an attacker use the DNN model itself as the trigger, allowing it to evade data pre-processing and data encryption.
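The triggering mechanism described in the abstract can be illustrated with a small software sketch. This is a hypothetical model, not the paper's RTL: the real Int-Monitor sits between the host processor and the accelerator in hardware, and the signature format, layer-class names, and bias value used below are illustrative assumptions. The monitor records the stream of per-layer "done" interrupts, compares the most recent window against a stored signature of the target model, and, on a match, overwrites the bias buffer with large negative values so that no ReLU neuron can activate.

```python
# Software sketch of the Int-Monitor triggering idea (hypothetical names).
# The real Int-Monitor is an RTL block snooping the accelerator's interrupt
# line; here each interrupt is modeled as a per-layer completion event
# tagged with a coarse layer class.

# Assumed interrupt pattern that fingerprints the victim model.
TARGET_SIGNATURE = ["conv", "conv", "pool", "fc"]


class IntMonitor:
    def __init__(self, signature):
        self.signature = signature
        self.observed = []       # sliding window of recent interrupts
        self.triggered = False

    def on_interrupt(self, layer_class):
        """Called on every accelerator 'layer done' interrupt."""
        self.observed.append(layer_class)
        # Keep only the most recent window of the signature's length.
        self.observed = self.observed[-len(self.signature):]
        if self.observed == self.signature:
            self.triggered = True

    def attack_bias_buffer(self, bias_buffer):
        """Once triggered, corrupt the global bias buffer so that every
        pre-activation is driven far negative and ReLU outputs zero."""
        if self.triggered:
            return [-1e9 for _ in bias_buffer]
        return bias_buffer


monitor = IntMonitor(TARGET_SIGNATURE)
for layer in ["conv", "conv", "pool", "fc"]:
    monitor.on_interrupt(layer)

biases = [0.1, 0.2, 0.3]
corrupted = monitor.attack_bias_buffer(biases)
```

In this sketch a non-target model (a different layer sequence) never matches the signature, so its bias buffer passes through untouched; only the fingerprinted model trips the trojan, which is what lets the trigger survive input-data pre-processing and encryption.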


Data availability statement

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.


Author information


Corresponding author

Correspondence to Peng Li.

Ethics declarations

Conflict of interest

The authors have declared that they have no conflicts of interest that are relevant to the content of this work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, P., Hou, R. Int-Monitor: a model triggered hardware trojan in deep learning accelerators. J Supercomput 79, 3095–3111 (2023). https://doi.org/10.1007/s11227-022-04759-y

Download citation

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11227-022-04759-y
