Abstract:
In this article, we consider the misspecified optimization problem of minimizing a convex function $f(x;\theta^*)$ in $x$ over a conic constraint set represented by $h(x;\theta^*) \in \mathcal{K}$, where $\theta^*$ is an unknown (or misspecified) vector of parameters, $\mathcal{K}$ is a closed convex cone, and $h$ is affine in $x$. Suppose that $\theta^*$ is unavailable but may be learnt by a separate process that generates a sequence of estimators $\theta_k$, each of which is an increasingly accurate approximation of $\theta^*$. We develop a first-order inexact augmented Lagrangian (AL) scheme for computing an optimal solution $x^*$ corresponding to $\theta^*$ while simultaneously learning $\theta^*$. In particular, we derive rate statements for such schemes when the penalty parameter sequence is either constant or increasing, and derive bounds on the overall complexity in terms of proximal gradient steps when AL subproblems are inexactly solved via an accelerated proximal gradient scheme. Numerical results for a portfolio optimization problem with a misspecified covariance matrix suggest that these schemes perform well in practice, while naive sequential schemes may perform poorly in comparison.
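To make the coupled structure concrete, the following is a minimal sketch (not the authors' algorithm) of an inexact AL loop for the special case $\mathcal{K} = \{0\}$, i.e., an equality-constrained instance $h(x;\theta) = Ax - b(\theta) = 0$ with a simple quadratic $f$. It uses plain gradient steps with a constant penalty as a stand-in for the accelerated proximal gradient subproblem solver named in the abstract; all names (`f_grad`, `get_estimate`, `rho`, the inner step count) are illustrative assumptions.

```python
import numpy as np

# Sketch of an inexact augmented Lagrangian (AL) scheme for
#   min_x f(x; theta*)  s.t.  A x = b(theta*)   (K = {0}),
# where theta* is unknown and a separate learning process supplies
# increasingly accurate estimates theta_k at each outer iteration.

rng = np.random.default_rng(0)
n, m = 5, 2
A = rng.standard_normal((m, n))
theta_star = rng.standard_normal(n)   # unknown in practice; used here to simulate learning

def f_grad(x, theta):
    # gradient of the illustrative objective f(x; theta) = 0.5*||x||^2 + theta^T x
    return x + theta

def b(theta):
    # constraint right-hand side depends on the misspecified parameter
    return A @ theta

def get_estimate(k):
    # stand-in for the separate learning process: theta_k -> theta* as k grows
    return theta_star + rng.standard_normal(n) / (k + 1) ** 2

x, lam = np.zeros(n), np.zeros(m)
rho = 10.0                            # constant penalty-parameter variant
step = 1.0 / (1.0 + rho * np.linalg.norm(A, 2) ** 2)  # 1/Lipschitz constant of the AL gradient

for k in range(50):
    theta_k = get_estimate(k)         # current estimate of theta*
    # Inexactly minimize the AL in x with a fixed budget of gradient steps;
    # the paper instead uses an accelerated proximal gradient scheme.
    for _ in range(20):
        r = A @ x - b(theta_k)        # h(x; theta_k)
        g = f_grad(x, theta_k) + A.T @ (lam + rho * r)
        x -= step * g
    lam += rho * (A @ x - b(theta_k)) # multiplier update

print("constraint violation at theta*:", np.linalg.norm(A @ x - b(theta_star)))
```

The key design point the sketch illustrates is that each outer iteration uses the latest estimate $\theta_k$ when forming the AL subproblem and updating the multiplier, so optimization and learning proceed simultaneously rather than sequentially.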
Published in: IEEE Transactions on Automatic Control (Volume 67, Issue 8, August 2022)