Communication-efficient federated learning based on compressed sensing and ternary quantization

Published in Applied Intelligence.

Abstract

Most existing work on Federated Learning (FL) transmits full-precision weights, which contain a significant amount of redundant information and therefore impose a substantial communication burden. This issue is particularly pronounced with the growing prevalence of smart mobile and Internet of Things (IoT) devices, where data sharing incurs a large communication cost. To address this issue, we propose a communication-efficient Federated Learning algorithm, FedCSTQ, based on compressed sensing (CS) and ternary quantization. FedCSTQ introduces a heuristic sparsification method that improves information selection, thereby mitigating the accuracy degradation typically associated with CS. Additionally, the algorithm applies ternary quantization to the residuals left after sparsification, further reducing the impact of sparsification on accuracy while keeping the communication overhead small. Experiments conducted on publicly available datasets show that FedCSTQ outperforms standard FL (FedAvg), SignSGD with a majority vote, FL using dithering (CEP-FL), and FL based on compressed sensing (CS-FL). Ablation studies further demonstrate the effectiveness of our method.
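The two-stage compression the abstract describes — sparsify the update, then ternary-quantize what sparsification left behind — can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the paper's FedCSTQ: top-k selection stands in for the paper's heuristic sparsification and CS measurement step, and the 0.7·mean magnitude threshold is an assumed quantization rule.

```python
import numpy as np

np.random.seed(0)

def topk_sparsify(w, k):
    # Keep only the k largest-magnitude entries; zero out the rest.
    keep = np.argpartition(np.abs(w), -k)[-k:]
    s = np.zeros_like(w)
    s[keep] = w[keep]
    return s

def ternary_quantize(r):
    # Map each residual entry to {-a, 0, +a}: entries below a magnitude
    # threshold become 0; the rest keep only their sign, scaled by the
    # mean magnitude of the surviving entries.
    thresh = 0.7 * np.mean(np.abs(r))      # threshold rule is an assumption
    t = np.sign(r) * (np.abs(r) > thresh)  # ternary pattern in {-1, 0, +1}
    a = np.abs(r[t != 0]).mean() if np.any(t) else 0.0
    return a * t

# Client-side compression of one model update:
w = np.random.randn(1000)              # stand-in for a local weight update
s = topk_sparsify(w, k=100)            # sparse part: send values + indices
r_q = ternary_quantize(w - s)          # residual: send signs + one scale
approx = s + r_q                       # what the server reconstructs
```

The communication saving comes from the payload shapes: the sparse part needs only k value/index pairs, and the quantized residual needs one float plus a sign per entry (under 2 bits each), instead of 32 bits per weight.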

(Figures 1–8 and Algorithms 1–3 appear in the full article.)


Availability of data and materials

The MNIST dataset is available at: http://yann.lecun.com/exdb/mnist/. The Fashion-MNIST dataset is available at: https://github.com/zalandoresearch/fashion-mnist. The CIFAR-10 dataset is available at: https://www.cs.toronto.edu/~kriz/cifar.html.

References

  1. Cauffman C, Goanta C (2021) A new order: The digital services act and consumer protection. Eur J Risk Regul 12(4):758–774


  2. McMahan HB, Moore E, Ramage D et al (2016) Federated learning of deep networks using model averaging. arXiv preprint arXiv:1602.05629

  3. Hamer J, Mohri M, Suresh AT (2020) Fedboost: A communication-efficient algorithm for federated learning. In: International Conference on Machine Learning, PMLR, pp 3973–3983

  4. Cook J, Rehman SU, Khan MA (2023) Security and privacy for low power iot devices on 5g and beyond networks: Challenges and future directions. IEEE Access 11:39295–39317. https://doi.org/10.1109/ACCESS.2023.3268064


  5. Ji Y, Chen L (2022) Fedqnn: A computation-communication-efficient federated learning framework for iot with low-bitwidth neural network quantization. IEEE Internet Things J 10(3):2494–2507


  6. Donoho DL (2006) Compressed sensing. IEEE Trans Inf Theor 52(4):1289–1306


  7. Bernstein J, Zhao J, Azizzadenesheli K et al (2018a) signsgd with majority vote is communication efficient and fault tolerant. arXiv preprint arXiv:1810.05291

  8. Bernstein J, Wang YX, Azizzadenesheli K et al (2018b) signsgd: Compressed optimisation for non-convex problems. In: International Conference on Machine Learning, PMLR, pp 560–569

  9. Li Q, Diao Y, Chen Q et al (2022) Federated learning on non-iid data silos: An experimental study. In: 2022 IEEE 38th International Conference on Data Engineering (ICDE), pp 965–978, https://doi.org/10.1109/ICDE53745.2022.00077

  10. Sattler F, Wiedemann S, Müller KR, et al (2020) Robust and communication-efficient federated learning from non-i.i.d. data. IEEE Trans Neural Netw Learn Syst 31(9):3400–3413. https://doi.org/10.1109/TNNLS.2019.2944481

  11. Xu J, Du W, Jin Y et al (2022) Ternary compression for communication-efficient federated learning. IEEE Trans Neural Netw Learn Syst 33(3):1162–1176. https://doi.org/10.1109/TNNLS.2020.3041185


  12. Hasırcıoğlu B, Gündüz D (2024) Communication efficient private federated learning using dithering. In: ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp 7575–7579, https://doi.org/10.1109/ICASSP48485.2024.10446222

  13. Liu Y, Chang S, Liu Y (2023) Fedcs: Communication-efficient federated learning with compressive sensing. In: 2022 IEEE 28th International Conference on Parallel and Distributed Systems (ICPADS), pp 17–24, https://doi.org/10.1109/ICPADS56603.2022.00011

  14. Li C, Li G, Varshney PK (2021) Communication-efficient federated learning based on compressed sensing. IEEE Internet Things J 8(20):15531–15541. https://doi.org/10.1109/JIOT.2021.3073112


  15. Daubechies I, DeVore R, Fornasier M et al (2010) Iteratively reweighted least squares minimization for sparse recovery. Commun Pure Appl Math A J Issued Courant Inst Math Sci 63(1):1–38


  16. Needell D, Tropp JA (2009) Cosamp: Iterative signal recovery from incomplete and inaccurate samples. Appl Comput Harmon Anal 26(3):301–321


  17. Wei E, Ozdaglar A (2012) Distributed alternating direction method of multipliers. In: 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), IEEE, pp 5445–5450

  18. Blumensath T, Davies ME (2008) Iterative thresholding for sparse approximations. J Fourier Anal Appl 14:629–654


  19. Sharbaf MS (2022) Iot driving new business model, and iot security, privacy, and awareness challenges. In: 2022 IEEE 8th World Forum on Internet of Things (WF-IoT), IEEE, pp 1–4

  20. Oh Y, Lee N, Jeon YS et al (2023) Communication-efficient federated learning via quantized compressed sensing. IEEE Trans Wirel Commun 22(2):1087–1100. https://doi.org/10.1109/TWC.2022.3201207


  21. Fan X, Wang Y, Huo Y et al (2021) Communication-efficient federated learning through 1-bit compressive sensing and analog aggregation. In: 2021 IEEE International Conference on Communications Workshops (ICC Workshops), pp 1–6, https://doi.org/10.1109/ICCWorkshops50388.2021.9473872

  22. Guo Y, Yao A, Chen Y (2016) Dynamic network surgery for efficient dnns. Adv Neural Inf Process Syst 29

  23. Li H, Kadav A, Durdanovic I et al (2016) Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710

  24. Deng L (2012) The mnist database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Process Mag 29(6):141–142

  25. Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747

  26. Krizhevsky A (2012) Learning multiple layers of features from tiny images. University of Toronto

  27. Mu Y, Liu W, Liu X et al (2016) Stochastic gradient made stable: A manifold propagation approach for large-scale optimization. IEEE Trans Knowl Data Eng 29(2):458–471


  28. Zhu L, Han S (2020) Deep Leakage from Gradients, Springer International Publishing, Cham, pp 17–31. https://doi.org/10.1007/978-3-030-63076-8_2


Funding

This research work was supported by the National Natural Science Foundation of China under Grant 62366004 and the Guangxi Key Technologies R&D Program under Grant AB24010316.

Author information


Corresponding author

Correspondence to Jiali Zheng.

Ethics declarations

Conflict of interest

The authors have no competing interests to declare that are relevant to the content of this article.

Ethics approval

This article does not contain any studies with human participants or animals performed by any of the authors.


Cite this article

Zheng, J., Tang, J. Communication-efficient federated learning based on compressed sensing and ternary quantization. Appl Intell 55, 100 (2025). https://doi.org/10.1007/s10489-024-05979-w
