RRML: Privacy Preserving Machine Learning Based on Random Response Technology

  • Conference paper

Network and System Security (NSS 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13983)

Abstract

Machine learning models have been shown to be vulnerable to model inversion and membership inference attacks, which raises serious privacy concerns for their application in sensitive scenarios. State-of-the-art privacy-preserving machine learning methods typically provide privacy guarantees at the cost of data utility. This inevitably degrades model performance, especially for models trained on small datasets. Optimizing the trade-off between individual privacy and data utility is therefore a critical issue in machine learning. In this work, we propose RRML (Random Response Machine Learning), a privacy-preserving machine learning algorithm that combines the random response mechanism with semi-supervised teacher-student learning, and we provide a privacy analysis. Extensive experiments validate the effectiveness of RRML in addressing this problem. The results confirm its superiority over state-of-the-art privacy-preserving machine learning algorithms in balancing data utility and privacy, especially in small-data scenarios.
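The abstract's two ingredients, a random-response perturbation and a teacher-student (PATE-style) aggregation, can be illustrated with a minimal sketch. This is not the authors' RRML implementation (which is described in the full paper): the function names, the retention probability `p`, and the majority vote over noisy teacher labels are all illustrative assumptions. In the classic k-ary randomized response mechanism, the true label is reported with probability p and any other label with probability (1-p)/(k-1), which satisfies local differential privacy with epsilon = ln(p(k-1)/(1-p)).

```python
import math
import random
from collections import Counter

def randomized_response(true_label: int, num_classes: int, p: float) -> int:
    """k-ary randomized response: report the true label with probability p,
    otherwise report one of the other labels uniformly at random."""
    if random.random() < p:
        return true_label
    others = [c for c in range(num_classes) if c != true_label]
    return random.choice(others)

def epsilon(num_classes: int, p: float) -> float:
    """Local differential-privacy level of the mechanism above:
    ln( p * (k - 1) / (1 - p) ) for k classes."""
    return math.log(p * (num_classes - 1) / (1 - p))

def aggregate(teacher_votes, num_classes: int, p: float) -> int:
    """Perturb each teacher's vote via randomized response, then take a
    majority over the noisy votes; the student would train on this label."""
    noisy = [randomized_response(v, num_classes, p) for v in teacher_votes]
    return Counter(noisy).most_common(1)[0][0]
```

With many teachers the majority of the noisy votes still recovers the consensus label with high probability, while any single teacher's vote remains locally private; this is the utility/privacy trade-off the paper targets.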



Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant No. 6197226, the Natural Science Foundation of Guangdong Province under Grant No. 2021A1515011153, and the Shenzhen Science and Technology Innovation Commission under Grant Nos. 20200805142159001 and JCYJ20220531103401003.

Author information

Correspondence to Jia Wang.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Wang, J., He, S., Lin, Q. (2023). RRML: Privacy Preserving Machine Learning Based on Random Response Technology. In: Li, S., Manulis, M., Miyaji, A. (eds) Network and System Security. NSS 2023. Lecture Notes in Computer Science, vol 13983. Springer, Cham. https://doi.org/10.1007/978-3-031-39828-5_12

  • DOI: https://doi.org/10.1007/978-3-031-39828-5_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-39827-8

  • Online ISBN: 978-3-031-39828-5

  • eBook Packages: Computer Science, Computer Science (R0)
