Mutual Gradient Inversion: Unveiling Privacy Risks of Federated Learning on Multi-Modal Signals


Abstract:

Federated Learning (FL) preserves privacy by training a global model through gradient exchange between the parameter server and local clients rather than by sharing raw data. Edge devices equipped with sensors act as local clients that receive multimodal signals and contribute multimodal data for training. Despite the privacy-centric design of FL, it remains vulnerable to gradient leakage attacks. Existing studies, however, focus predominantly on single-modality data recovery from gradients, leaving a critical research gap in multimodal scenarios. In this letter, we propose MGIS (Mutual Gradient Inversion Strategy), the first gradient inversion attack-and-defense paradigm for multimodal data. Inspired by knowledge distillation, MGIS exploits information shared across modalities (e.g., labels) to reconstruct multimodal data from gradients. Experimental results demonstrate that MGIS outperforms single-modality gradient attacks in the quality of recovered private data and highlight that multimodal data carries a greater privacy leakage risk than single-modality data.
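To make the attack surface the abstract refers to concrete, the following is a minimal sketch of a generic single-modality gradient inversion attack in the style of "Deep Leakage from Gradients" (DLG): the attacker optimizes dummy data and labels so that their gradients match the gradients a client shares. This is illustrative only, not the authors' MGIS method; the model, input shapes, and hyperparameters are assumptions chosen for brevity.

```python
# Sketch of a DLG-style gradient inversion attack (illustrative; not MGIS).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical victim model and one private training example held by a client.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])

# Gradients the server observes from the client's local update.
loss = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker initializes dummy data and a soft dummy label, then matches gradients.
x_dummy = torch.randn(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(100):
    def closure():
        optimizer.zero_grad()
        # Cross-entropy with the recovered soft label.
        dummy_loss = torch.sum(
            -F.softmax(y_dummy, dim=-1) * F.log_softmax(model(x_dummy), dim=-1)
        )
        dummy_grads = torch.autograd.grad(
            dummy_loss, model.parameters(), create_graph=True
        )
        # Minimize the distance between dummy gradients and observed gradients.
        grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    optimizer.step(closure)

print("recovered label:", y_dummy.argmax().item(), "| true label:", y_true.item())
```

Per the abstract, MGIS extends this kind of attack to the multimodal setting by using information shared across modalities, such as labels, to guide the joint recovery of multiple modalities from their gradients.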
Published in: IEEE Signal Processing Letters (Volume: 31)
Page(s): 2745 - 2749
Date of Publication: 02 September 2024
