Strict Differentially Private Support Vector Machines with Dimensionality Reduction

  • Conference paper
  • Published in: Artificial Intelligence Security and Privacy (AIS&P 2023)

Abstract

With the widespread collection and processing of data, privacy-preserving machine learning has become increasingly important for addressing privacy risks to individuals. The support vector machine (SVM) is one of the most fundamental machine learning models, and privacy issues surrounding SVM classifier training have attracted increasing attention. In this paper, we propose DPDR-DPSVM, a strictly differentially private support vector machine algorithm with high data utility. Targeting high-dimensional data, we apply differential privacy in both the dimensionality reduction phase and the SVM classifier training phase, which improves model accuracy while providing strong privacy guarantees. Moreover, we train DP-compliant SVM classifiers by adding noise to the objective function itself, which leads to better data utility. Extensive experiments on three high-dimensional datasets demonstrate that DPDR-DPSVM achieves high accuracy while ensuring strong privacy protection.
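
The abstract describes a two-phase design: a differentially private dimensionality reduction step followed by an SVM classifier trained under differential privacy by perturbing the objective function. The paper's exact DPDR-DPSVM algorithm is not reproduced here; the following is a minimal illustrative sketch of that general two-phase idea, assuming a noisy-covariance PCA step and objective perturbation in the spirit of Chaudhuri et al. The function names (dp_pca, objective_perturbed_svm), the squared-hinge surrogate loss, and the noise scales are illustrative assumptions, not calibrated sensitivity analyses.

```python
import numpy as np

def dp_pca(X, k, epsilon):
    """Hypothetical noisy-PCA step: perturb the empirical covariance with
    symmetric Laplace noise before eigendecomposition, then keep the top-k
    eigenvectors. Assumes each row of X has L2 norm <= 1; the noise scale
    below is an illustrative choice, not a calibrated sensitivity bound."""
    n, d = X.shape
    cov = X.T @ X / n
    noise = np.triu(np.random.laplace(scale=2.0 * d / (n * epsilon), size=(d, d)))
    noise = noise + noise.T - np.diag(np.diag(noise))   # keep the matrix symmetric
    eigvals, eigvecs = np.linalg.eigh(cov + noise)
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]    # d x k projection matrix

def objective_perturbed_svm(X, y, epsilon, lam=0.1, lr=0.1, iters=500):
    """Linear SVM-style classifier with a squared-hinge surrogate, trained by
    gradient descent on a perturbed objective J(w) + b^T w / n (objective
    perturbation in the spirit of Chaudhuri et al.; the epsilon adjustment
    that depends on the regularizer is omitted for brevity)."""
    n, d = X.shape
    # Draw b with density proportional to exp(-epsilon * ||b|| / 2):
    # Gamma-distributed norm, uniformly random direction.
    direction = np.random.normal(size=d)
    direction /= np.linalg.norm(direction)
    b = np.random.gamma(shape=d, scale=2.0 / epsilon) * direction

    w = np.zeros(d)
    for _ in range(iters):
        active = np.maximum(1.0 - y * (X @ w), 0.0)
        grad_loss = -2.0 * (X * (y * active)[:, None]).mean(axis=0)  # squared hinge
        w -= lr * (grad_loss + lam * w + b / n)
    return w

# Usage on synthetic data: rows scaled to unit L2 norm, labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.where(X[:, 0] + 0.1 * rng.normal(size=200) > 0, 1.0, -1.0)

P = dp_pca(X, k=10, epsilon=0.5)                   # private projection to 10 dims
w = objective_perturbed_svm(X @ P, y, epsilon=0.5)
print("training accuracy:", np.mean(np.sign((X @ P) @ w) == y))
```

In a full treatment, the total privacy budget would be split between the reduction and training phases and each noise scale derived from a proven sensitivity bound; this sketch omits that accounting.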

Acknowledgment

This work was supported in part by the National Natural Science Foundation of China (No. 62102311) and in part by the Natural Science Basic Research Program of Shaanxi (Program No. 2022JQ-600).

Author information

Corresponding author

Correspondence to Teng Wang.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Wang, T., Liu, S., Liang, J., Wang, S., Wang, L., Song, J. (2024). Strict Differentially Private Support Vector Machines with Dimensionality Reduction. In: Vaidya, J., Gabbouj, M., Li, J. (eds) Artificial Intelligence Security and Privacy. AIS&P 2023. Lecture Notes in Computer Science, vol 14509. Springer, Singapore. https://doi.org/10.1007/978-981-99-9785-5_11

  • DOI: https://doi.org/10.1007/978-981-99-9785-5_11

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-9784-8

  • Online ISBN: 978-981-99-9785-5

  • eBook Packages: Computer Science, Computer Science (R0)
