Time Varying optimization via Inexact Proximal Online Gradient Descent


Abstract:

We consider the minimization of a time-varying function that comprises a differentiable and a non-differentiable component. Such functions occur in the context of learning and estimation problems, where the loss function is often differentiable and strongly convex, while the regularizer and the constraints translate to a non-differentiable penalty. A dynamic version of the proximal online gradient descent algorithm is designed that can handle errors in the gradient. The performance of the proposed algorithm is analyzed within the online convex optimization framework, and bounds on the dynamic regret are developed. These bounds generalize existing results on non-differentiable minimization. Further, the inexact-gradient results are leveraged to propose online algorithms for large-scale problems where the full gradient cannot be calculated at every iteration. Instead, we put forth an online proximal stochastic variance reduced gradient descent algorithm that can work with sampled data. Tests on a robot formation control problem demonstrate the efficacy of the proposed algorithms.
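
As a rough illustration of the update studied in the paper, the sketch below implements an inexact proximal online gradient descent step for a time-varying composite objective f_t(x) = g_t(x) + lam*||x||_1, where the gradient of the smooth part is corrupted by an error term. The names soft_threshold and inexact_proximal_ogd, the Gaussian error model, and the drifting least-squares example are all illustrative assumptions, not the authors' implementation.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inexact_proximal_ogd(x0, grad_g, lam, step, T, noise_std=0.0, seed=0):
    # One proximal gradient step per round on the newly revealed loss
    # f_t = g_t + lam*||.||_1, using a (possibly erroneous) gradient of g_t.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    iterates = [x.copy()]
    for t in range(T):
        g = grad_g(t, x)                              # gradient of the smooth part g_t at x_t
        e = noise_std * rng.standard_normal(x.shape)  # hypothetical gradient error e_t
        x = soft_threshold(x - step * (g + e), step * lam)  # inexact proximal step
        iterates.append(x.copy())
    return iterates

# Example: track a slowly drifting sparse target, g_t(x) = 0.5*||x - b_t||^2.
d, T = 5, 200
b = lambda t: np.concatenate([np.array([np.sin(0.05 * t)]), np.zeros(d - 1)])
grad_g = lambda t, x: x - b(t)
xs = inexact_proximal_ogd(np.zeros(d), grad_g, lam=0.1, step=0.5, T=T, noise_std=0.01)

For the large-scale setting described in the abstract, the full gradient g in the loop above would be replaced by a variance-reduced stochastic estimate in the style of proximal SVRG, i.e. grad_g_i(t, x) - grad_g_i(t, x_snapshot) + full_grad_at_snapshot for a sampled component index i, so that only sampled data is touched at each iteration; this fits the error model above with e_t the deviation of the estimate from the true gradient.
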
Date of Conference: 28-31 October 2018
Date Added to IEEE Xplore: 21 February 2019
Conference Location: Pacific Grove, CA, USA
