Can Inputs’ Reconstruction Information Be Coded into Machine Learning Model’s Outputs?

  • Conference paper
  • In: Computer Security. ESORICS 2023 International Workshops (ESORICS 2023)

Abstract

There is growing demand for confidential inference in machine learning services, in which the privacy of user data is protected throughout the inference process. Even in this setting, the model provider can mount privacy attacks using the outputs of the model. A previous study inferred only sensitive attributes of user data from the model outputs. In this paper, we present an attack that reconstructs the input user data of a machine learning model from its outputs. The model provider trains the inference model so that it embeds reconstruction information for the user data into the model outputs while maintaining high inference accuracy. In parallel, the attacker trains a second model that recovers the user data from the inference model's outputs, which contain that reconstruction information. Experimental results on six image datasets of varying complexity show that LPIPS, a perceptual similarity metric between two images for which lower values indicate greater similarity, reaches a minimum of 0.01, while inference accuracy remains at the same level as with normal training.
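To make the idea concrete, below is a minimal PyTorch sketch of the kind of joint training the abstract describes: a classifier whose output vector doubles as a covert code, and a decoder that reconstructs the input from that vector. The architectures, the loss weight `lam`, and all names here are illustrative assumptions for a small grayscale-image task, not the authors' actual implementation.

```python
# Minimal sketch (assumed, not the paper's code): the provider trains a
# classifier whose outputs secretly carry enough information for a
# separate decoder to reconstruct the input image.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10  # assumed label count

class Classifier(nn.Module):
    """Inference model f: image -> class logits (what the user receives)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, NUM_CLASSES),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Attacker's model g: logits -> reconstructed image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

f, g = Classifier(), Decoder()
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)
lam = 1.0  # assumed weight trading off accuracy vs. reconstruction

def train_step(x, y):
    opt.zero_grad()
    logits = f(x)
    x_hat = g(logits)
    # Joint objective: keep inference accuracy high while forcing the
    # output logits to encode enough information to rebuild the input.
    loss = F.cross_entropy(logits, y) + lam * F.mse_loss(x_hat, x)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage on random tensors standing in for a real image dataset:
x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, NUM_CLASSES, (32,))
print(train_step(x, y))
```

Under this setup, reconstruction quality would be scored with a perceptual metric such as LPIPS (lower is more similar), alongside the classifier's test accuracy, matching the two quantities the abstract reports.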



Author information

Correspondence to Kazuki Iwahana.

Ethics declarations

Code Availability

Our code is publicly available on GitHub (https://github.com/kaz-iwahana/ReconstInputs).


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Iwahana, K., Saisho, O., Miura, T., Ito, A. (2024). Can Inputs’ Reconstruction Information Be Coded into Machine Learning Model’s Outputs?. In: Katsikas, S., et al. Computer Security. ESORICS 2023 International Workshops. ESORICS 2023. Lecture Notes in Computer Science, vol 14399. Springer, Cham. https://doi.org/10.1007/978-3-031-54129-2_39


  • DOI: https://doi.org/10.1007/978-3-031-54129-2_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-54128-5

  • Online ISBN: 978-3-031-54129-2

  • eBook Packages: Computer Science; Computer Science (R0)
