ABSTRACT
Regarded as the development paradigm of the next-generation Internet after the Web and mobile-Internet revolutions, the metaverse has attracted extensive attention. However, potential security problems hinder its further development. In particular, federated learning (FL), a distributed learning framework for sharing User Generated Content (UGC), an important asset in the metaverse, cannot withstand covert poisoning attacks and remains threatened by security vulnerabilities. This paper therefore proposes a cohort-based credit evaluation method for federated learning (CoCE) to address the privacy and integrity deficiencies of traditional federated learning. By constructing cohorts, we alleviate the problem of uneven data distribution in federated learning and enable multitask learning. In addition, we combine the cohort mechanism to evaluate each client's comprehensive credit in terms of the number of clients, data distribution, and model preference, and to screen out potential attackers during federated learning. Finally, we conduct experiments on open datasets to verify the robustness and superiority of CoCE.
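The abstract does not specify how the comprehensive credit score is computed. As a minimal illustrative sketch only (all function names, weights, and the threshold below are hypothetical, not taken from the paper), a cohort-level credit score might combine each client's agreement with the cohort's mean update ("model preference") with its normalised data size, then flag low-credit clients as potential poisoners:

```python
import math

def cosine(u, v):
    # Cosine similarity between two model-update vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def credit_scores(updates, sizes, w_pref=0.7, w_size=0.3):
    """Toy comprehensive credit score for clients in one cohort.

    updates: list of model-update vectors, one per client
    sizes:   list of local dataset sizes (a crude proxy for data distribution)
    Combines (a) agreement of each update with the cohort mean and
    (b) the client's share of the cohort's data.
    """
    dim = len(updates[0])
    mean = [sum(u[i] for u in updates) / len(updates) for i in range(dim)]
    total = sum(sizes)
    scores = []
    for u, n in zip(updates, sizes):
        pref = (cosine(u, mean) + 1) / 2   # map [-1, 1] into [0, 1]
        scores.append(w_pref * pref + w_size * (n / total))
    return scores

def screen(scores, threshold=0.5):
    # Indices of clients whose credit falls below the threshold.
    return [i for i, s in enumerate(scores) if s < threshold]
```

For example, three honest clients submitting similar updates and one client submitting an inverted (poisoned) update would leave only the poisoner below the threshold. The actual CoCE mechanism is cohort-based and multi-criteria; this sketch only conveys the screening idea at the smallest possible scale.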
Index Terms
- Cohort-based Federated Learning Credit Evaluation Method in the Metaverse