
Explaining Federated Learning Through Concepts in Image Classification

  • Conference paper
Algorithms and Architectures for Parallel Processing (ICA3PP 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14491)


Abstract

Federated learning is a machine learning framework that resolves the problem of data silos while keeping data under secure protection measures, and it is gradually becoming a standard paradigm for future AI development. In recent years, federated learning research has advanced in areas such as security, model aggregation, and incentive mechanisms. However, the interpretability of models trained under the federated learning framework remains largely unexplored. To bridge this gap, this paper proposes an interpretable model called Federated Concept Learning (FCL). In FCL, each client trains a Bottleneck Concept Learner (BotCL) to generate human-understandable concepts and uploads the co-occurrence scores between concepts and classes obtained from training to the server. To mitigate the influence of potentially malicious clients on the model, the server optimizes the received co-occurrence scores before aggregating them. The aggregated scores are then sent back to the clients to update their models, which perform the classification task based solely on the presence or absence of the concepts. Experimental results show that our model achieves performance comparable to other federated learning methods, successfully mitigates the impact of malicious clients on model performance, and provides an interpretation of the model's classification results.

Code is available at https://github.com/jiaxin-shen/FCL.
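
The abstract describes a server that aggregates per-client concept-class co-occurrence scores after an optimization step meant to limit the influence of malicious clients, and clients that then classify images purely from the presence or absence of concepts. The exact aggregation rule is not given in this preview, so the following Python sketch uses a coordinate-wise trimmed mean only as a stand-in robust aggregator; the names aggregate_cooccurrence and trim_ratio are illustrative and not taken from the FCL code.

import numpy as np

def aggregate_cooccurrence(client_scores, trim_ratio=0.1):
    # client_scores: list of (num_concepts, num_classes) arrays, one per client.
    # A coordinate-wise trimmed mean discards the most extreme values so that a
    # small number of malicious clients cannot dominate the global scores.
    stacked = np.stack(client_scores, axis=0)           # (num_clients, C, K)
    k = int(trim_ratio * stacked.shape[0])
    ordered = np.sort(stacked, axis=0)                  # sort across clients, per entry
    trimmed = ordered[k:stacked.shape[0] - k] if k > 0 else ordered
    return trimmed.mean(axis=0)                         # (C, K) aggregated scores

# Example: three honest clients plus one client submitting inflated scores.
honest = [np.random.rand(10, 5) for _ in range(3)]
outlier = np.full((10, 5), 10.0)
global_scores = aggregate_cooccurrence(honest + [outlier], trim_ratio=0.25)

# Classification then relies only on which concepts are present: a binary
# concept-activation vector is scored against the aggregated matrix.
presence = (np.random.rand(10) > 0.5).astype(float)
predicted_class = int(np.argmax(presence @ global_scores))

In the full FCL pipeline the aggregated scores would be sent back to the clients to update their BotCL-based models; the sketch stops after one aggregation round and one concept-based prediction.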



Acknowledgment

This work is supported by the High-Performance Computing Center of Dalian Maritime University, and the Fundamental Research Funds for the Central Universities, JLU.

Author information


Corresponding author

Correspondence to Xiaoyi Tao.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Shen, J., Tao, X., Li, L., Li, Z., Wang, B. (2024). Explaining Federated Learning Through Concepts in Image Classification. In: Tari, Z., Li, K., Wu, H. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2023. Lecture Notes in Computer Science, vol 14491. Springer, Singapore. https://doi.org/10.1007/978-981-97-0808-6_19


  • DOI: https://doi.org/10.1007/978-981-97-0808-6_19

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0807-9

  • Online ISBN: 978-981-97-0808-6

  • eBook Packages: Computer Science, Computer Science (R0)
