
Protecting Bilateral Privacy in Machine Learning-as-a-Service: A Differential Privacy Based Defense

  • Conference paper
  • First Online:
Artificial Intelligence Security and Privacy (AIS&P 2023)

Abstract

With the continuous promotion and deepening adoption of Machine Learning-as-a-Service (MLaaS) across various societal domains, its privacy problems arise frequently and are receiving increasing attention from researchers. However, existing research focuses either on the client-side query privacy problem or on the server-side model privacy problem, and lacks defense schemes that address both sides simultaneously. In this paper, we design differential-privacy-based privacy-preserving mechanisms for the client side and the server side, respectively, for the first time. By injecting noise into query requests and model responses, both the clients and the servers in MLaaS are privacy-protected. Experimental results demonstrate that the proposed solution preserves accuracy while providing privacy protection for both clients and servers in MLaaS.
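The bilateral idea described above can be sketched with the classic Laplace mechanism: the client perturbs its feature vector before sending a query, and the server perturbs its confidence scores before returning a response. This is only an illustrative sketch, not the paper's actual construction; the function names, the choice of the Laplace mechanism, and the sensitivity/epsilon values are all assumptions for demonstration.

```python
import numpy as np

def laplace_noise(scale, size, rng):
    # Laplace mechanism: noise drawn with scale = sensitivity / epsilon
    return rng.laplace(loc=0.0, scale=scale, size=size)

def perturb_query(x, sensitivity, epsilon, rng):
    """Client side (hypothetical): noise the feature vector before it leaves the client."""
    return x + laplace_noise(sensitivity / epsilon, x.shape, rng)

def perturb_response(probs, sensitivity, epsilon, rng):
    """Server side (hypothetical): noise the confidence vector, then renormalize
    so the reply still looks like a probability distribution."""
    noisy = probs + laplace_noise(sensitivity / epsilon, probs.shape, rng)
    noisy = np.clip(noisy, 1e-6, None)  # keep entries positive before normalizing
    return noisy / noisy.sum()

rng = np.random.default_rng(0)
query = perturb_query(np.array([0.2, 0.7, 0.1]), sensitivity=1.0, epsilon=2.0, rng=rng)
reply = perturb_response(np.array([0.6, 0.3, 0.1]), sensitivity=1.0, epsilon=2.0, rng=rng)
```

Smaller epsilon means larger noise and stronger privacy on either side, at the cost of the accuracy the paper's experiments measure.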



Acknowledgement

This project was supported in part by collaborative research funding from the National Research Council of Canada’s Artificial Intelligence for Logistics Program. Part of Haonan Yan’s work was done while he was visiting the School of Computer Science at the University of Guelph.

Author information


Corresponding author

Correspondence to Xiaodong Lin.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Wang, L., Yan, H., Lin, X., Xiong, P. (2024). Protecting Bilateral Privacy in Machine Learning-as-a-Service: A Differential Privacy Based Defense. In: Vaidya, J., Gabbouj, M., Li, J. (eds) Artificial Intelligence Security and Privacy. AIS&P 2023. Lecture Notes in Computer Science, vol 14509. Springer, Singapore. https://doi.org/10.1007/978-981-99-9785-5_17


  • DOI: https://doi.org/10.1007/978-981-99-9785-5_17


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-9784-8

  • Online ISBN: 978-981-99-9785-5

  • eBook Packages: Computer Science, Computer Science (R0)
