Abstract
Model extraction is a major threat to embedded deep neural network models, as physical access extends the attack surface. By physically accessing a device, an adversary may exploit side-channel leakages to extract critical information about a model (i.e., its architecture or internal parameters). Several adversarial objectives are possible, including a fidelity-based scenario in which the architecture and parameters are precisely extracted (model cloning). This work focuses on software implementations of deep neural networks embedded in a high-end 32-bit microcontroller (Cortex-M7) and exposes several challenges related to fidelity-based parameter extraction through side-channel analysis, from the basic multiplication operation to the feed-forward propagation through the layers. To precisely extract the value of parameters represented in the IEEE-754 single-precision floating-point standard, we propose an iterative process that is evaluated with both simulations and traces from a Cortex-M7 target. To our knowledge, this work is the first to target such a high-end 32-bit platform. Importantly, we raise and discuss the remaining challenges for the complete extraction of a deep neural network model, most notably the critical case of biases.
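The extraction target described above is the bit-level layout of IEEE-754 single-precision values. As a minimal illustration (the function name is ours, not from the paper), a 32-bit float can be decomposed into its sign, exponent, and mantissa fields:

```python
import struct

def ieee754_fields(value: float):
    """Decompose a single-precision float into (sign, exponent, mantissa)."""
    # Re-encode the value as a 32-bit IEEE-754 word (big-endian).
    (word,) = struct.unpack(">I", struct.pack(">f", value))
    sign = word >> 31                # 1 sign bit
    exponent = (word >> 23) & 0xFF   # 8 exponent bits, biased by 127
    mantissa = word & 0x7FFFFF       # 23 mantissa bits
    return sign, exponent, mantissa
```

An iterative parameter-extraction attack must recover each of these fields from side-channel leakage; for example, `ieee754_fields(-2.5)` yields sign 1, biased exponent 128, and mantissa `0x200000`.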
Notes
- 1.
More particularly, cache-based attacks that are out of our scope.
- 2.
Due to the IEEE-754 encoding, the second byte of an encoded value contains the least significant bit of the exponent and the 7 most significant bits of the mantissa.
- 3.
The specific features of CNNs compared to MLPs that should impact the leakage exploitation are not discussed in [1].
- 4.
- 5.
Contrary to Convolutional Neural Network models.
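The byte-level claim in note 2 can be checked directly: in the big-endian view of an IEEE-754 single-precision encoding, the second byte holds the exponent's least significant bit followed by the mantissa's 7 most significant bits. A short sketch (helper name is illustrative, not from the paper):

```python
import struct

def second_byte_fields(value: float):
    """Split the second big-endian byte into (exponent LSB, top-7 mantissa bits)."""
    encoded = struct.pack(">f", value)  # 4-byte big-endian IEEE-754 encoding
    second = encoded[1]
    exp_lsb = second >> 7               # least significant exponent bit
    mant_hi7 = second & 0x7F            # 7 most significant mantissa bits
    return exp_lsb, mant_hi7
```

For instance, 1.0 encodes as `0x3F800000`, so its second byte `0x80` carries exponent LSB 1 and mantissa top bits 0; this is why an attack that recovers bytes independently must disambiguate the exponent/mantissa boundary inside this byte.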
References
Batina, L., Bhasin, S., Jap, D., Picek, S.: CSI NN: reverse engineering of neural network architectures through electromagnetic side channel. In: 28th USENIX Security Symposium (USENIX Security 2019), pp. 515–532 (2019)
Breier, J., Jap, D., Hou, X., Bhasin, S., Liu, Y.: SNIFF: reverse engineering of neural networks with fault attacks. IEEE Trans. Reliab. (2021)
Carlini, N., Jagielski, M., Mironov, I.: Cryptanalytic extraction of neural network models. In: Micciancio, D., Ristenpart, T. (eds.) CRYPTO 2020. LNCS, vol. 12172, pp. 189–218. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56877-1_7
Chabanne, H., Danger, J.L., Guiga, L., Kühne, U.: Side channel attacks for architecture extraction of neural networks. CAAI Trans. Intell. Technol. 6(1), 3–16 (2021)
Dubey, A., Cammarota, R., Aysu, A.: MaskedNet: the first hardware inference engine aiming power side-channel protection. In: 2020 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), pp. 197–208 (2020)
Dumont, M., Moëllic, P.A., Viera, R., Dutertre, J.M., Bernhard, R.: An overview of laser injection against embedded neural network models. In: 2021 IEEE 7th World Forum on Internet of Things (WF-IoT), pp. 616–621. IEEE (2021)
Gongye, C., Fei, Y., Wahl, T.: Reverse-engineering deep neural networks using floating-point timing side-channels. In: 2020 57th ACM/IEEE Design Automation Conference (DAC), pp. 1–6 (2020). ISSN 0738-100X
Hua, W., Zhang, Z., Suh, G.E.: Reverse engineering convolutional neural networks through side-channel information leaks. In: 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC), pp. 1–6 (2018)
Jagielski, M., Carlini, N., Berthelot, D., Kurakin, A., Papernot, N.: High accuracy and high fidelity extraction of neural networks. In: 29th USENIX Security Symposium (USENIX Security 2020), pp. 1345–1362 (2020)
Joud, R., Moëllic, P.A., Bernhard, R., Rigaud, J.B.: A review of confidentiality threats against embedded neural network models. In: 2021 IEEE 7th World Forum on Internet of Things (WF-IoT). IEEE (2021)
Maji, S., Banerjee, U., Chandrakasan, A.P.: Leaky nets: recovering embedded neural network models and inputs through simple power and timing side-channels - attacks and defenses. IEEE Internet of Things J. (2021)
Méndez Real, M., Salvador, R.: Physical side-channel attacks on embedded neural networks: a survey. Appl. Sci. 11(15), 6790 (2021)
Oh, S.J., Schiele, B., Fritz, M.: Towards reverse-engineering black-box neural networks. In: Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.-R. (eds.) Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. LNCS (LNAI), vol. 11700, pp. 121–144. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28954-6_7
Xiang, Y., Chen, Z., et al.: Open DNN box by power side-channel attack. IEEE Trans. Circuits Syst. II Express Briefs 67(11), 2717–2721 (2020)
Yu, H., Ma, H., Yang, K., Zhao, Y., Jin, Y.: DeepEM: deep neural networks model recovery through EM side-channel information leakage. In: 2020 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), pp. 209–218. IEEE (2020)
Acknowledgements
This work is supported by (CEA-Leti) the European project InSecTT (ECSEL JU 876038) and by the French ANR in the framework of the Investissements d’avenir program (ANR-10-AIRT-05, irtnanoelec); and (Mines Saint-Etienne) by the French program ANR PICTURE (AAPG2020). This work benefited from the French Jean Zay supercomputer with the AI dynamic access program.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Joud, R., Moëllic, PA., Pontié, S., Rigaud, JB. (2023). A Practical Introduction to Side-Channel Extraction of Deep Neural Network Parameters. In: Buhan, I., Schneider, T. (eds) Smart Card Research and Advanced Applications. CARDIS 2022. Lecture Notes in Computer Science, vol 13820. Springer, Cham. https://doi.org/10.1007/978-3-031-25319-5_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-25318-8
Online ISBN: 978-3-031-25319-5