Unsupervised perturbation based self-supervised federated adversarial training


Abstract

Similar to traditional machine learning, federated learning is susceptible to adversarial attacks. Existing defenses against federated attacks often rely on extensive labeling during local training to enhance model robustness, yet labeling typically requires significant resources. To address the twin challenges of expensive labeling and robustness in federated learning, we propose the Unsupervised Perturbation based Self-Supervised Federated Adversarial Training (UPFAT) framework. Within local clients, we introduce an unsupervised adversarial sample generation method that adapts the classical self-supervised framework BYOL (Bootstrap Your Own Latent): by maximizing the distance between the embeddings of different transformations of the same input, it generates unsupervised adversarial samples designed to confuse the model. For model communication, we present the Robustness-Enhanced Moving Average (REMA) module, which adaptively incorporates global model updates according to the local model's robustness. Extensive experiments demonstrate that UPFAT outperforms existing methods by 3–4%.
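The abstract describes the method only at a high level; the following is a minimal, hypothetical PyTorch-style sketch of the two ideas it mentions, not the authors' implementation. All names and hyperparameters (byol_loss, unsup_adv_example, rema_update, eps, alpha, steps, beta_max) are illustrative assumptions: the first function performs a PGD-style, label-free search for a perturbation that maximizes the BYOL distance between two views of the same input, and the second mixes local and global weights with a coefficient tied to a local robustness score, in the spirit of REMA.

```python
# Hypothetical sketch only: illustrates the ideas in the abstract,
# not the authors' UPFAT code. All names and values are assumptions.
import torch
import torch.nn.functional as F


def byol_loss(p, z):
    """Standard BYOL objective: 2 - 2 * cosine similarity."""
    p = F.normalize(p, dim=-1)
    z = F.normalize(z, dim=-1)
    return 2 - 2 * (p * z).sum(dim=-1).mean()


def unsup_adv_example(online_net, predictor, target_net, view1, view2,
                      eps=8 / 255, alpha=2 / 255, steps=5):
    """Label-free PGD-style search for a perturbation of view1 that
    maximizes the distance between the embeddings of the two views."""
    with torch.no_grad():
        z = target_net(view2)                          # target branch, no gradient
    delta = torch.zeros_like(view1).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        p = predictor(online_net(torch.clamp(view1 + delta, 0, 1)))
        loss = byol_loss(p, z)                         # ascend to confuse the model
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)    # stay inside the L_inf ball
    return torch.clamp(view1 + delta, 0, 1).detach()


def rema_update(local_state, global_state, robustness, beta_max=0.9):
    """Robustness-weighted mixing of local and global weights: the lower the
    local robustness score (in [0, 1]), the more weight the global update gets."""
    beta = beta_max * (1.0 - robustness)
    return {k: (1 - beta) * local_state[k] + beta * global_state[k]
            for k in local_state}
```

In a UPFAT-like training loop, such samples would serve as additional inputs for local adversarial training, and a rule like rema_update would replace naive overwriting of local weights by the global model at each communication round; the paper's exact robustness measure and mixing rule may differ.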




Availability of Data and Material

The datasets generated and analyzed during the current study are available from the corresponding author upon request.

Code Availability

The code used in the current study is available from the corresponding author upon request.


Funding

This work was supported by National Natural Science Foundation of China (62376151), Shanghai Science and Technology Commission (22DZ2205600).

Author information


Contributions

YuYue Zhang wrote the primary sections of the manuscript and led the main parts of the subsequent revisions. XiaoLi Zhao assisted with the design and structuring of the initial draft and provided feedback and support during the subsequent revisions. HanChen Ye contributed to the design of the initial draft.

Corresponding author

Correspondence to Xiaoli Zhao.

Ethics declarations

Ethics Approval

Not applicable.

Competing Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, Y., Ye, H. & Zhao, X. Unsupervised perturbation based self-supervised federated adversarial training. Appl Intell 55, 177 (2025). https://doi.org/10.1007/s10489-024-05938-5

