Abstract:
Distributionally Robust Optimization (DRO) mitigates the effect of distributional uncertainty in supervised learning by optimizing over an uncertainty ball of distributions, typically centered around the empirical distribution of the training sample. In this paper, we consider DRO for the problem of Unsupervised Domain Adaptation (UDA). In classical UDA, the goal is to adapt a model trained on a labeled source domain to a new, unlabeled target domain. Adapting classical DRO to the UDA setting by enlarging the uncertainty radius around the source until it covers the target can lead to excessive regularization. To mitigate this, we propose to use Optimal Transport (OT) to transport the source domain to a vicinity of the target and to construct the DRO problem around the transported samples, which keeps the uncertainty radius small while retaining a high likelihood of including the true target distribution. Our numerical experiments validate the superiority of our method over existing robust approaches.
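
To make the two-step idea above concrete, the sketch below illustrates one possible realization in Python; it is not the authors' implementation, whose details appear in the paper. It assumes the POT library for the OT barycentric map that pushes labeled source samples toward the unlabeled target, PyTorch for training, and a simple CVaR-style surrogate for the DRO inner maximization; the function names, the model, and the level alpha are all hypothetical.

import numpy as np
import ot                      # POT: Python Optimal Transport
import torch
import torch.nn.functional as F

def transport_source_to_target(xs: np.ndarray, xt: np.ndarray) -> np.ndarray:
    """Barycentric OT map: push each source point toward the target cloud."""
    ns, nt = len(xs), len(xt)
    a = np.full(ns, 1.0 / ns)                  # uniform source marginal
    b = np.full(nt, 1.0 / nt)                  # uniform target marginal
    M = ot.dist(xs, xt)                        # squared Euclidean cost matrix
    G = ot.emd(a, b, M)                        # optimal transport plan
    return ns * G @ xt                         # barycentric projection of source

def cvar_dro_loss(losses: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """CVaR surrogate for the DRO inner max: average of the worst alpha-fraction of losses."""
    k = max(1, int(alpha * losses.numel()))
    return torch.topk(losses, k).values.mean()

def train_step(model, optimizer, xs_transported, ys, alpha=0.1):
    """One robust training step on OT-transported source samples (torch tensors)."""
    optimizer.zero_grad()
    logits = model(xs_transported)
    per_sample = F.cross_entropy(logits, ys, reduction="none")
    loss = cvar_dro_loss(per_sample, alpha)    # small-radius robustness around the
    loss.backward()                            # transported empirical distribution
    optimizer.step()
    return loss.item()

# Usage (hypothetical data):
#   xs_t = transport_source_to_target(xs_np, xt_np)
#   train_step(model, opt, torch.from_numpy(xs_t).float(), ys_torch)
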
Published in: 2024 IEEE 34th International Workshop on Machine Learning for Signal Processing (MLSP)
Date of Conference: 22-25 September 2024
Date Added to IEEE Xplore: 04 November 2024