Abstract:
As many countries have promulgated laws to protect users' data privacy, how to use users' data legally has become a hot topic. With the emergence of federated learning (FL), also known as collaborative learning, multiple participants can jointly train a common, robust, and secure machine learning model while addressing key issues in data sharing, such as privacy, security, and accessibility. Unfortunately, existing research shows that FL is not as secure as claimed: gradient leakage and the correctness of aggregation results remain key problems. Recently, some scholars have tried to address these security problems in FL with cryptographic and verification techniques. However, several issues in these schemes remain unsolved. First, some solutions cannot guarantee the correctness of the aggregation results. Second, existing state-of-the-art FL schemes incur costly computational and communication overhead. In this article, we propose SVFLC, a secure and verifiable FL scheme with chain aggregation, to solve these problems. We first design a privacy-preserving method that prevents gradient leakage and defends against collusion attacks by semi-honest users. We then construct a verification method based on a homomorphic hash function that ensures the correctness of the weighted aggregation results. In addition, SVFLC can trace users who introduce calculation errors during the aggregation process. Finally, extensive experimental results on real-world data sets demonstrate that SVFLC is efficient compared with other solutions.
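The verification idea mentioned in the abstract can be illustrated with a minimal sketch. This is not SVFLC's actual construction (the paper's scheme, parameters, and group choices are not reproduced here); it only shows the general principle of a multiplicative homomorphic hash, H(x) = g^x mod p, which satisfies H(a + b) = H(a) · H(b) mod p, so an aggregated sum can be checked against the product of per-user hashes. The modulus, generator, and gradient values below are illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's scheme): homomorphic hash over a prime field.
P = 2**127 - 1   # Mersenne prime used as a toy modulus; real schemes use larger, vetted groups
G = 5            # assumed base element (illustrative choice)

def h(x: int) -> int:
    """Homomorphic hash H(x) = G^x mod P; exponents reduce mod P - 1 (Fermat)."""
    return pow(G, x % (P - 1), P)

# Each user publishes H(gradient_i); the server publishes the aggregate sum.
gradients = [12345, 67890, 424242]   # toy per-user (integer-encoded) gradients
agg = sum(gradients)

# A verifier multiplies the individual hashes and compares with H(aggregate).
prod = 1
for gi in gradients:
    prod = (prod * h(gi)) % P

assert prod == h(agg)   # holds for an honest aggregation
```

If the server tampers with the sum, the product of the published hashes no longer matches H(aggregate), so the verification fails; this is the intuition behind checking the correctness of aggregation results without revealing individual gradients.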
Published in: IEEE Internet of Things Journal (Volume: 11, Issue: 8, 15 April 2024)