Analysis of Trainability of Gradient-based Multi-environment Learning from Gradient Norm Regularization Perspective


Abstract:

Adaptation and invariance to multiple environments are both crucial abilities for intelligent systems. Model-agnostic meta-learning (MAML) is a meta-learning algorithm that enables such adaptability, and invariant risk minimization (IRM) is a problem setting for achieving invariant representations across multiple environments. Both methods can be formulated as optimization problems with an environment-dependent constraint, and this constraint is known to hamper optimization. Understanding the effect of the constraint on optimization is therefore important. In this paper, we provide conceptual insight into how the constraint affects the optimization of MAML and IRM by analyzing the trainability of gradient descent on a loss with a gradient norm penalty, which is easier to study yet related to both MAML and IRM. We conduct numerical experiments with practical datasets and architectures for MAML and IRM, and validate that the analysis of the gradient-norm-penalized loss captures well the empirical relationship between the constraint and the trainability of MAML and IRM.
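To make the analyzed objective concrete: a gradient norm penalty augments a base loss with the squared norm of its own gradient. The following is a minimal illustrative sketch of one gradient-descent step on such a penalized objective, not the paper's actual implementation; the quadratic model, the data, and the penalty weight `lam` are arbitrary choices for demonstration.

```python
import jax
import jax.numpy as jnp


def base_loss(theta, x, y):
    # Simple linear-model mean-squared error (illustrative choice).
    pred = x @ theta
    return jnp.mean((pred - y) ** 2)


def penalized_loss(theta, x, y, lam):
    # Base loss plus the squared gradient norm penalty:
    #   L(theta) + lam * ||grad L(theta)||^2
    g = jax.grad(base_loss)(theta, x, y)
    return base_loss(theta, x, y) + lam * jnp.sum(g ** 2)


# One gradient-descent step on the penalized objective.
# Note: jax.grad differentiates through the inner gradient,
# so this involves second-order derivatives of base_loss.
theta = jnp.array([1.0, -2.0])
x = jnp.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = jnp.array([0.5, 0.5, 1.0])
lam, lr = 0.1, 0.1

grad_fn = jax.grad(penalized_loss)
theta_new = theta - lr * grad_fn(theta, x, y, lam)
```

The second-order derivative that appears when differentiating the penalty term is the same structural ingredient that makes MAML's bilevel updates and IRM's penalized objectives harder to train, which is why the paper uses this simpler loss as a proxy.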
Date of Conference: 18-22 July 2021
Date Added to IEEE Xplore: 21 September 2021
Conference Location: Shenzhen, China

