Authors:
Lucas Hugo ¹, Philippe Feyel ² and David Saussié ¹
Affiliations:
¹ Robotics and Autonomous Systems Laboratory, Polytechnique Montréal, Montréal, Québec H3T 1J4, Canada
² Safran Electronics & Defense Canada, Montréal, Canada
Keyword(s):
Lyapunov Functions, Stability, Neural Network Optimization, Convexity, Region of Attraction.
Abstract:
The Lyapunov principle involves finding a positive Lyapunov function with a local minimum at the equilibrium point, whose time derivative is negative with a local maximum at that point. A common validation step is to check the signs of the Hessian eigenvalues, which can be involved: it requires a formal, and in particular differentiable, expression of the system dynamics. To circumvent this, we propose in this paper a scheme that validates these functions without computing the Hessian. Two methods are proposed to enforce convexity of the function near the equilibrium: one uses a single neural network to model the Lyapunov function, while the other uses an additional network to approximate its time derivative. The training process is designed to maximize the region of attraction of the trained locally convex neural Lyapunov function. Examples validate the effectiveness of this approach through comparison with the Hessian-based approach.
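For illustration only, the sketch below shows a generic neural Lyapunov training loop in PyTorch; it is not the authors' implementation. A network V(x) with V(0) = 0 is trained so that V is positive and its time derivative along the dynamics is negative on sampled states. The damped-pendulum dynamics f, the network sizes, the margins, and the hinge-style losses are all assumptions chosen for the example; the paper's convexity-forcing constructions and region-of-attraction objective are not reproduced here.

import torch
import torch.nn as nn

# Hypothetical dynamics for illustration: a damped pendulum (not from the paper).
def f(x):
    theta, omega = x[:, 0], x[:, 1]
    return torch.stack([omega, -torch.sin(theta) - 0.5 * omega], dim=1)

# Candidate Lyapunov network; V(0) = 0 is enforced by subtracting the value at the origin.
class LyapunovNet(nn.Module):
    def __init__(self, dim=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        origin = torch.zeros(1, x.shape[1])
        return self.net(x) - self.net(origin)

V = LyapunovNet()
opt = torch.optim.Adam(V.parameters(), lr=1e-3)

for step in range(2000):
    # Sample states in a box around the equilibrium at the origin.
    x = (torch.rand(256, 2) - 0.5) * 4.0
    x.requires_grad_(True)
    v = V(x)
    # Time derivative along the flow: V'(x) = grad V(x) . f(x), via automatic differentiation
    # (no Hessian is ever formed).
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    v_dot = (grad_v * f(x)).sum(dim=1, keepdim=True)
    # Hinge losses push V above, and V' below, a small margin that grows with ||x||,
    # so both conditions hold strictly away from the equilibrium.
    margin = 1e-3 * x.norm(dim=1, keepdim=True)
    loss = torch.relu(margin - v).mean() + torch.relu(v_dot + margin).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()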