
PEVLR: A New Privacy-Preserving and Efficient Approach for Vertical Logistic Regression

  • Conference paper
  • Neural Information Processing (ICONIP 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14450)


Abstract

In this paper, we consider logistic regression in vertical federated learning. We propose a new algorithm, PEVLR (Privacy-preserving and Efficient Vertical Logistic Regression), to solve vertical logistic regression efficiently while preserving privacy. To enhance communication and computational efficiency, we design a novel local-update and global-update scheme for party \(\mathcal{A}\) and party \(\mathcal{B}\), respectively. For the local update, we use hybrid SGD rather than vanilla SGD to mitigate the variance resulting from stochastic gradients. For the global update, the full gradient is adopted to update the parameters of party \(\mathcal{B}\), which leads to a faster convergence rate and fewer communication rounds. Furthermore, we design a simple but efficient scheme for exchanging intermediate information with a privacy-preserving guarantee. Specifically, random matrix sketches and randomly selected permutations are used to protect the original data, label information, and parameters under the honest-but-curious assumption. The experimental results show the advantages of PEVLR in terms of convergence rate, accuracy, and efficiency compared with other related models.
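The local update described in the abstract can be illustrated with a minimal, single-machine sketch of a hybrid stochastic gradient estimator of the kind the abstract refers to: a convex combination of a SARAH-style recursive gradient and a plain mini-batch gradient, applied here to ordinary logistic regression. This is an illustration under our own assumptions, not the two-party PEVLR protocol; the names (hybrid_sgd, beta, batch) are ours, and the vertical data partitioning, matrix sketching, and permutation steps are omitted.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def minibatch_grad(w, X, y, idx):
    # Mini-batch gradient of the logistic loss at w.
    p = sigmoid(X[idx] @ w)
    return X[idx].T @ (p - y[idx]) / len(idx)

def hybrid_sgd(X, y, steps=200, batch=32, lr=0.1, beta=0.9, seed=0):
    # Hybrid estimator: v_t = beta * (v_{t-1} + g(w_t) - g(w_{t-1})) + (1 - beta) * g(w_t),
    # a convex combination of a SARAH-style recursive gradient and a plain
    # stochastic gradient, which keeps the variance of the update direction small.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w_prev = np.zeros(d)
    w = np.zeros(d)
    v = minibatch_grad(w, X, y, rng.choice(n, size=batch, replace=False))
    for _ in range(steps):
        idx = rng.choice(n, size=batch, replace=False)
        g_cur = minibatch_grad(w, X, y, idx)        # gradient at the current iterate
        g_prev = minibatch_grad(w_prev, X, y, idx)  # same batch, previous iterate
        v = beta * (v + g_cur - g_prev) + (1.0 - beta) * g_cur
        w_prev, w = w, w - lr * v
    return w

# Example on synthetic data: labels depend on X through a fixed weight vector.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)
w_hat = hybrid_sgd(X, y)

In PEVLR as described above, steps of this local-update flavour would be carried out by party \(\mathcal{A}\), while party \(\mathcal{B}\) is updated with the full gradient to reduce the number of communication rounds.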


Notes

  1. https://www.kaggle.com/c/GiveMeSomeCredit.

  2. http://yann.lecun.com/exdb/mnist.

  3. https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients.


Author information

Correspondence to Sihan Mao.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Mao, S., Zheng, X., Zhang, J., Hu, X. (2024). PEVLR: A New Privacy-Preserving and Efficient Approach for Vertical Logistic Regression. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Lecture Notes in Computer Science, vol 14450. Springer, Singapore. https://doi.org/10.1007/978-981-99-8070-3_29

  • DOI: https://doi.org/10.1007/978-981-99-8070-3_29

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8069-7

  • Online ISBN: 978-981-99-8070-3

  • eBook Packages: Computer Science, Computer Science (R0)
