Medical Image Segmentation Based on Federated Distillation Optimization Learning on Non-IID Data

  • Conference paper
Advanced Intelligent Computing Technology and Applications (ICIC 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14088)

Abstract

Federated learning promises to alleviate the scarcity of labelled data in medical image segmentation while protecting data privacy and security. However, medical image segmentation under federated learning raises several open problems: how to achieve high-precision segmentation with a federated model when client data are imbalanced, how to improve communication efficiency during federation, and how to prevent gradient explosion in federated distillation. To address these difficulties, this paper proposes a new federated distillation optimization algorithm. First, we design a small-scale network model for communication between the clients and the central server to reduce communication overhead; second, we design a distillation method that keeps the local models stable. Finally, we add a coordinator to the central server before aggregation and introduce a model filtering mechanism that evaluates and filters client model parameters and weights, which keeps the global model optimization on track, prevents gradient explosion caused by malicious or extreme client models, and improves segmentation accuracy in the target domain. Experiments on two medical image segmentation tasks show that the approach is effective on non-IID data: the average DICE coefficient reaches 82.79% while the communication overhead is reduced by a factor of 16.
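
The abstract describes a coordinator on the central server that filters and evaluates client models before aggregation, but it does not spell out the filtering criterion. The sketch below is illustrative only: it assumes a simple distance threshold (norm_clip) between each client model and the current global model, followed by a FedAvg-style weighted average of the clients that pass the filter. The function name filter_and_aggregate and all parameters are hypothetical and not taken from the paper.

```python
import numpy as np

def filter_and_aggregate(global_params, client_params, client_weights, norm_clip=10.0):
    """Coordinator-side sketch: discard client models whose distance from the
    current global model is abnormally large (a stand-in for the paper's
    filtering of malicious or extreme clients), then take a FedAvg-style
    weighted average of the surviving client models."""
    kept_params, kept_weights = [], []
    for params, weight in zip(client_params, client_weights):
        # L2 distance between this client's model and the current global model.
        delta = np.sqrt(sum(float(np.sum((p - g) ** 2))
                            for p, g in zip(params, global_params)))
        if delta <= norm_clip:          # keep only clients within the threshold
            kept_params.append(params)
            kept_weights.append(weight)
    if not kept_params:                 # every client filtered out: keep the old model
        return global_params
    total = float(sum(kept_weights))
    # Weighted average, layer by layer, over the surviving clients.
    return [sum((w / total) * params[i] for w, params in zip(kept_weights, kept_params))
            for i in range(len(global_params))]
```

Rejecting clients that drift far from the global model approximates the abstract's goal of suppressing malicious or extreme updates before they destabilize the global model; the paper's actual mechanism may score and weight clients differently.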

Acknowledgement

This work was supported by the National Natural Science Foundation of China (No. 61861004).

Author information

Corresponding author

Correspondence to Feng Yang.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Liu, F., Yang, F. (2023). Medical Image Segmentation Based on Federated Distillation Optimization Learning on Non-IID Data. In: Huang, DS., Premaratne, P., Jin, B., Qu, B., Jo, KH., Hussain, A. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2023. Lecture Notes in Computer Science, vol 14088. Springer, Singapore. https://doi.org/10.1007/978-981-99-4749-2_30

  • DOI: https://doi.org/10.1007/978-981-99-4749-2_30

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-4748-5

  • Online ISBN: 978-981-99-4749-2

  • eBook Packages: Computer Science, Computer Science (R0)
