Abstract:
It is known that for a strictly concave-convex function, the gradient method introduced by Arrow and Hurwicz [1] has guaranteed global convergence to its saddle point. Nevertheless, there are classes of problems where the function considered is not strictly concave-convex, in which case convergence to a saddle point is not guaranteed. In this paper we characterize the asymptotic behaviour of the gradient method when it is applied to a general concave-convex function. We prove that from any initial condition the gradient method converges to a trajectory described by an explicit linear ODE. We further show that this result extends naturally to subgradient methods, where the dynamics are constrained to a prescribed convex set. These results are used to give simple characterizations of the limiting solutions for special classes of optimization problems, and modifications of the problem that avoid oscillations are also discussed.
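To illustrate the oscillatory behaviour the paper addresses, the following minimal sketch (not from the paper; the function phi(x, y) = x*y and all names are hypothetical choices) integrates the Arrow-Hurwicz gradient dynamics x' = dphi/dx, y' = -dphi/dy with forward Euler. The bilinear phi is concave-convex but not strictly so, and trajectories orbit the saddle point (0, 0) rather than converging to it.

    # Sketch only: Euler simulation of saddle-point gradient dynamics
    # for phi(x, y) = x*y, a concave-convex but not strictly
    # concave-convex function; the trajectory oscillates around (0, 0).
    import numpy as np

    def gradient_method(x0, y0, step=1e-3, n_steps=20000):
        """Forward-Euler integration of x' = y (ascent in x),
        y' = -x (descent in y)."""
        x, y = x0, y0
        traj = [(x, y)]
        for _ in range(n_steps):
            dx = y        # d(phi)/dx = y
            dy = -x       # -d(phi)/dy = -x
            x, y = x + step * dx, y + step * dy
            traj.append((x, y))
        return np.array(traj)

    traj = gradient_method(1.0, 0.0)
    # The distance to the saddle (0, 0) stays approximately constant,
    # i.e. the method circles the saddle point instead of converging:
    print(np.linalg.norm(traj[0]), np.linalg.norm(traj[-1]))

In this example the limiting trajectory is itself the solution of a linear ODE (a rotation about the saddle point), consistent with the characterization stated in the abstract.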
Published in: 53rd IEEE Conference on Decision and Control
Date of Conference: 15-17 December 2014
Date Added to IEEE Xplore: 12 February 2015
Print ISSN: 0191-2216