Budget-Bounded Incentives for Federated Learning

  • Chapter
Federated Learning

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 12500))

Abstract

We consider federated learning settings with independent, self-interested participants. Because all contributions are made privately, participants may be tempted to free-ride: providing redundant or low-quality data while still enjoying the benefits of the FL model. This is especially harmful in federated learning, as low-quality data can degrade the quality of the shared FL model.

Free-riding can be countered by giving participants incentives to provide truthful data. While game-theoretic schemes exist for rewarding truthful data, they do not account for the redundancy of a contribution with respect to previously contributed data. This creates arbitrage opportunities in which participants earn rewards for redundant data, and the federation may be forced to pay out more in incentives than is justified by the value of the FL model.

We show that a scheme based on influence can guarantee both that the incentive budget is bounded in proportion to the value of the FL model and that truthfully reporting data is a dominant strategy for the participants. We show that, under reasonable conditions, this result holds even when the testing data is provided by participants.

Supported by EPFL.


Notes

  1. It is straightforward to extend the results in this chapter to a setting where increased effort yields increased quality, but this would require characterizing the exact relation, which depends on the application.


Author information

Correspondence to Boi Faltings.


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Richardson, A., Filos-Ratsikas, A., Faltings, B. (2020). Budget-Bounded Incentives for Federated Learning. In: Yang, Q., Fan, L., Yu, H. (eds.) Federated Learning. Lecture Notes in Computer Science, vol. 12500. Springer, Cham. https://doi.org/10.1007/978-3-030-63076-8_13


  • DOI: https://doi.org/10.1007/978-3-030-63076-8_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-63075-1

  • Online ISBN: 978-3-030-63076-8

  • eBook Packages: Computer Science, Computer Science (R0)
