
FLFHNN: An Efficient and Flexible Vertical Federated Learning Framework for Heterogeneous Neural Network

  • Conference paper
Wireless Algorithms, Systems, and Applications (WASA 2022)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13471))


Abstract

The emergence of vertical federated learning (VFL) enables joint modeling among participants that share the same sample ID space but hold different feature spaces. Privacy-preserving (PP) VFL is challenging because no single entity owns the complete set of labels and features, so participants must interact more frequently and directly. Existing PP schemes for VFL are often limited by communication cost, the model types supported, and the number of participants. We propose FLFHNN, a novel PP framework for heterogeneous neural networks based on CKKS fully homomorphic encryption (FHE). By exploiting the rich ciphertext arithmetic that FHE supports, FLFHNN removes the restriction to a limited class of generalized linear models, realizes “short link” communication between participants, and performs both training and inference in the encrypted state, keeping shared information confidential and mitigating potential leakage from the aggregated values of federated learning. In addition, FLFHNN extends flexibly to multi-party scenarios, adapting its algorithm to the number of participants. Our analysis and experiments demonstrate that, compared with a Paillier-based scheme, FLFHNN significantly reduces the system’s communication cost while retaining model accuracy: the interactions and the information transmitted during training are reduced by almost 2/3 and by more than 30%, respectively, making the framework well suited to large-scale Internet of Things scenarios.
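As background (this is an illustration, not code from the paper): CKKS natively evaluates only additions and multiplications on ciphertexts, so training and inference in the encrypted state typically replace nonlinear activations such as the sigmoid with low-degree polynomial approximations. The plain-Python sketch below, under the assumption that pre-activation values are normalized into a bounded range, measures the error of a degree-3 Taylor approximation of the sigmoid.

```python
import math

# CKKS supports only polynomial arithmetic on encrypted data, so a
# nonlinear activation must be approximated by a low-degree polynomial.
# Degree-3 Taylor expansion of the sigmoid around 0:
#   sigmoid(x) ~ 1/2 + x/4 - x^3/48
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_poly3(x):
    return 0.5 + x / 4.0 - x ** 3 / 48.0

# Maximum approximation error on [-2, 2], a range that bounded,
# normalized pre-activations are assumed to stay within.
grid = [-2.0 + 4.0 * i / 400 for i in range(401)]
max_err = max(abs(sigmoid(x) - sigmoid_poly3(x)) for x in grid)
print(f"max |sigmoid - poly3| on [-2, 2]: {max_err:.4f}")
```

The error stays below 0.05 on this interval but grows quickly outside it, which is why polynomial-activation schemes keep inputs bounded (e.g., via normalization) and why the degree of the polynomial trades accuracy against ciphertext multiplication depth.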




Acknowledgements

This work is supported by the Cooperation project between Chongqing Municipal undergraduate universities and institutes affiliated to CAS (HZ2021015).

Author information

Correspondence to Yan Zhang.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sun, H., Zhang, Y., Li, M., Xu, Z. (2022). FLFHNN: An Efficient and Flexible Vertical Federated Learning Framework for Heterogeneous Neural Network. In: Wang, L., Segal, M., Chen, J., Qiu, T. (eds) Wireless Algorithms, Systems, and Applications. WASA 2022. Lecture Notes in Computer Science, vol 13471. Springer, Cham. https://doi.org/10.1007/978-3-031-19208-1_28


  • DOI: https://doi.org/10.1007/978-3-031-19208-1_28


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19207-4

  • Online ISBN: 978-3-031-19208-1

  • eBook Packages: Computer Science; Computer Science (R0)
