
RIFL: A Fair Incentive Mechanism for Federated Learning

  • Conference paper
  • In: Advanced Intelligent Computing Technology and Applications (ICIC 2024)

Abstract

Federated Learning (FL) is an innovative framework that enables workers to collaboratively train a global shared model in a decentralized manner. Instead of transferring raw data to a centralized location, workers train the shared model locally. However, participating in federated learning tasks consumes communication resources and computing power and poses privacy risks. Naturally, workers are reluctant to engage in training without reasonable rewards. Moreover, there is a risk of malicious workers submitting harmful local models to undermine the global model and gain undeserved rewards. To tackle these challenges, we propose RIFL, which can fairly motivate honest workers to participate in FL tasks and prevent malicious workers from corrupting the global shared model. We employ centered kernel alignment (CKA) to assess the similarity between the local models submitted by workers and the global model. Subsequently, we utilize a similarity clustering-based approach to identify and eliminate local models submitted by potentially malicious workers. Additionally, a reward allocation mechanism based on reputation and data contribution is designed to motivate workers with high-quality data to participate in FL tasks and prevent intermittent attackers from gaining undeserved rewards. Finally, extensive experiments on benchmark datasets show that RIFL achieves high fairness and robustness, improving global model accuracy and motivating workers with high-quality data to participate in FL tasks under non-IID and unreliable scenarios.
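
As a concrete illustration of the filtering step described in the abstract, the sketch below computes linear CKA (Kornblith et al.) between representations of the global model and each worker's local model, then keeps the workers whose similarity scores fall in the higher of two clusters. The function names, the choice of linear rather than kernel CKA, the use of activations on a shared probe batch, and the one-dimensional two-means split are all illustrative assumptions; the abstract does not specify RIFL's exact procedure or its reward-allocation formula.

```python
# Illustrative sketch only: linear CKA similarity plus a simple
# similarity-clustering filter. Names and the two-means split are
# assumptions, not the paper's exact algorithm.
import numpy as np


def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between two activation matrices of shape (n_samples, n_features)."""
    x = x - x.mean(axis=0, keepdims=True)  # center each feature column
    y = y - y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(y.T @ x, ord="fro") ** 2   # ||Y^T X||_F^2
    norm_x = np.linalg.norm(x.T @ x, ord="fro")      # ||X^T X||_F
    norm_y = np.linalg.norm(y.T @ y, ord="fro")      # ||Y^T Y||_F
    return float(hsic / (norm_x * norm_y + 1e-12))


def filter_suspicious_workers(global_acts, local_acts_list, iters=20):
    """Keep workers whose CKA similarity to the global model falls in the
    higher of two clusters (1-D two-means); flag the rest as suspicious."""
    sims = np.array([linear_cka(global_acts, a) for a in local_acts_list])
    lo, hi = sims.min(), sims.max()
    if hi - lo < 1e-6:                      # all workers look alike: keep everyone
        return np.arange(len(sims)), sims
    for _ in range(iters):                  # Lloyd-style updates of the two centers
        split = (lo + hi) / 2.0
        low, high = sims[sims <= split], sims[sims > split]
        if len(low) == 0 or len(high) == 0:
            break
        lo, hi = low.mean(), high.mean()
    keep = np.where(sims > (lo + hi) / 2.0)[0]
    return keep, sims
```

In this hypothetical setup, `global_acts` and each entry of `local_acts_list` would be activations of the respective models on a common probe batch held by the server; the surviving workers' updates would then be aggregated, with rewards weighted by reputation and data contribution as the abstract describes.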

Acknowledgments

This work was supported by Key Projects of the Ministry of Science and Technology of the People’s Republic of China (2020YFC0832405) and High-Level Talent Aggregation Project in Hunan Province, China-Innovation Team (2019RS1060).

Author information

Corresponding author

Correspondence to Xinghai Liao.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Tang, H., Liao, X., Ouyang, J. (2024). RIFL: A Fair Incentive Mechanism for Federated Learning. In: Huang, DS., Zhang, X., Zhang, Q. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2024. Lecture Notes in Computer Science, vol 14875. Springer, Singapore. https://doi.org/10.1007/978-981-97-5663-6_31

  • DOI: https://doi.org/10.1007/978-981-97-5663-6_31

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-5662-9

  • Online ISBN: 978-981-97-5663-6

  • eBook Packages: Computer Science, Computer Science (R0)
