Abstract
Federated learning is a machine learning framework that addresses the problem of data silos while keeping data securely protected, and it is gradually becoming a standard paradigm for future AI development. In recent years, federated learning research has advanced in areas such as security, model aggregation, and incentive mechanisms. However, the interpretability of models trained within the federated learning framework remains largely unexplored. To bridge this gap, this paper proposes an interpretable model, Federated Concept Learning (FCL). FCL trains a Bottleneck Concept Learner (BotCL) on each client to generate human-understandable concepts. Each client uploads the concept-class co-occurrence scores obtained from training to the server; to mitigate the influence of possible malicious clients, the server optimizes and then aggregates the received co-occurrence scores. The aggregated scores are sent back to the clients to update their models, which perform the classification task solely based on the presence or absence of concepts. Experimental results show that our model performs on par with other federated learning methods, successfully mitigates the impact of malicious clients on model performance, and provides an interpretation of the model's classification results.
Code is available at https://github.com/jiaxin-shen/FCL.
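The client-to-server exchange described in the abstract can be sketched as follows. This is a minimal illustration only: it assumes a coordinate-wise trimmed mean as the robust aggregation step, which is one standard Byzantine-tolerant choice; the paper's actual optimization procedure, as well as the function and variable names below, are hypothetical.

```python
import numpy as np

def aggregate_cooccurrence(client_scores, trim_ratio=0.2):
    """Robustly aggregate per-client concept-class co-occurrence matrices.

    client_scores: list of (num_concepts, num_classes) arrays, one per client.
    A coordinate-wise trimmed mean bounds the influence of a minority of
    malicious clients that upload extreme scores.
    (Hypothetical sketch; the paper's actual optimization is not specified here.)
    """
    stacked = np.stack(client_scores)           # (num_clients, C, K)
    k = int(len(client_scores) * trim_ratio)    # clients trimmed at each extreme
    sorted_scores = np.sort(stacked, axis=0)    # sort each coordinate across clients
    if k > 0:
        sorted_scores = sorted_scores[k:len(client_scores) - k]
    return sorted_scores.mean(axis=0)           # (C, K) aggregated scores

# Example: 4 honest clients plus 1 malicious client uploading inflated scores.
honest = [np.full((3, 2), v) for v in (0.4, 0.5, 0.5, 0.6)]
malicious = np.full((3, 2), 100.0)
agg = aggregate_cooccurrence(honest + [malicious], trim_ratio=0.2)
# Trimming one client at each extreme discards the malicious scores,
# so each aggregated entry stays near the honest values (~0.53 here).
```

With five clients and `trim_ratio=0.2`, one client is dropped at each extreme per coordinate, so a single adversary cannot shift the aggregate; a plain mean would instead be dominated by the inflated values.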
Acknowledgment
This work is supported by the High-Performance Computing Center of Dalian Maritime University, and the Fundamental Research Funds for the Central Universities, JLU.
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Shen, J., Tao, X., Li, L., Li, Z., Wang, B. (2024). Explaining Federated Learning Through Concepts in Image Classification. In: Tari, Z., Li, K., Wu, H. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2023. Lecture Notes in Computer Science, vol 14491. Springer, Singapore. https://doi.org/10.1007/978-981-97-0808-6_19
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-0807-9
Online ISBN: 978-981-97-0808-6