
Contrastive-Enhanced Domain Generalization With Federated Learning


Impact Statement:
The proposed FedCDG framework addresses DG in the federated learning paradigm. A global model is optimized collaboratively without centralizing data from the source domains, and an improved instance normalization module as well as a prototype-based contrastive loss are proposed to enhance local model generality. Overall, the proposed FedCDG mitigates the risk of data leakage while achieving performance comparable to centralized learning, and it can be extended to other non-IID settings (e.g., semisupervised DG).

Abstract:

Domain generalization (DG) aims to train a global model from different but related domains that generalizes to an unseen out-of-distribution domain. Most existing DG methods follow the centralized learning paradigm, raising concerns about privacy leakage. In this article, we propose a contrastive-enhanced domain generalization framework in the federated learning paradigm, with one server and multiple clients. Each client owns data from one domain and builds a local model consisting of a domain-invariant feature extractor and a classifier head. The server generates a global model by aggregating and broadcasting the local models' parameters, thus sharing knowledge while keeping data confidential. To enhance the discrimination and generalization ability of the local model, we build an improved instance normalization module that focuses on task-relevant features with less domain-specific information. Moreover, for better classwise alignment in the embedding space, we propose a prototype-based contrastive loss. Given the limited annotation budget in practice, we also extend the proposed framework to the semisupervised DG setting (i.e., only ten labeled samples per class). Experimental results on three benchmarks with different backbones show that the proposed framework yields promising performance for both DG and semisupervised DG in the federated learning paradigm.
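The abstract names three ingredients: a per-client model (feature extractor plus classifier head), an instance normalization module that suppresses domain-specific statistics, and a prototype-based contrastive loss, with the server aggregating client parameters each round. The PyTorch sketch below is an illustrative reading of that description, not the authors' released code: the names LocalModel, prototype_contrastive_loss, and fed_avg are assumptions, plain InstanceNorm2d stands in for the paper's improved instance normalization module (whose exact design is not given here), and the InfoNCE-style prototype loss is one plausible form of the described prototype-based contrastive objective.

```python
# Hedged sketch of the components described in the abstract.
# All names and design details below are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalModel(nn.Module):
    """Per-client model: domain-invariant feature extractor + classifier head.

    Plain InstanceNorm2d is a stand-in for the paper's *improved* instance
    normalization module; instance norm is known to suppress domain-specific
    style statistics while keeping task-relevant content.
    """
    def __init__(self, feat_dim=128, num_classes=7):
        super().__init__()
        self.extractor = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1),
            nn.InstanceNorm2d(64, affine=True),  # normalizes per-sample style
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        z = self.extractor(x)          # embedding used by the contrastive loss
        return z, self.head(z)         # embedding and class logits

def prototype_contrastive_loss(feats, labels, prototypes, tau=0.1):
    """Pull each embedding toward its class prototype, push it from the rest.

    `prototypes` is a (num_classes, feat_dim) tensor, e.g. running class means
    of normalized features. A softmax over cosine similarities to all
    prototypes gives an InfoNCE-style loss (one assumed form of the paper's
    prototype-based contrastive loss).
    """
    feats = F.normalize(feats, dim=1)
    protos = F.normalize(prototypes, dim=1)
    logits = feats @ protos.t() / tau  # (batch, num_classes) similarities
    return F.cross_entropy(logits, labels)

@torch.no_grad()
def fed_avg(global_model, client_models, weights):
    """Server step: weighted average of client parameters (FedAvg-style).

    `weights` should sum to 1 (e.g. proportional to client dataset sizes).
    The averaged model is then broadcast back to all clients.
    """
    g_state = global_model.state_dict()
    for k in g_state:
        g_state[k] = sum(w * m.state_dict()[k]
                         for w, m in zip(weights, client_models))
    global_model.load_state_dict(g_state)
    return global_model

# Example server round (uniform weights over three source-domain clients):
# clients = [LocalModel() for _ in range(3)]
# server  = fed_avg(LocalModel(), clients, weights=[1/3, 1/3, 1/3])
```

In this reading, each client minimizes a standard classification loss on its own domain plus the prototype loss on the extractor's embeddings, and only model parameters (never raw data) travel to the server, which is what keeps the source-domain data confidential.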
Published in: IEEE Transactions on Artificial Intelligence (Volume: 5, Issue: 4, April 2024)
Page(s): 1525-1532
Date of Publication: 24 July 2023
Electronic ISSN: 2691-4581

