A one-layer recurrent neural network for constrained nonconvex optimization☆
Introduction
In this paper, the following constrained nonconvex minimization problem is considered: minimize f(x) subject to g(x) ≤ 0, (1) where x ∈ ℝⁿ is the decision vector; f : ℝⁿ → ℝ and g : ℝⁿ → ℝᵐ are continuously differentiable functions, but not necessarily convex. The feasible region 𝒳 = {x ∈ ℝⁿ : g(x) ≤ 0} is assumed to be nonempty. We denote by 𝒳* the set of global solutions of problem (1).
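To fix notation, a generic instance of problem (1) can be represented as a pair of callables. The particular functions below are a hypothetical example chosen for illustration only, not an instance from the paper:

```python
import numpy as np

# A hypothetical nonconvex instance of (1):
#   minimize   f(x) = x1^4 - 2*x1^2 + x2^2   (nonconvex in x1)
#   subject to g(x) = x1 + x2 - 1 <= 0
def f(x):
    return x[0]**4 - 2*x[0]**2 + x[1]**2

def g(x):
    # One inequality constraint, returned as a vector g : R^2 -> R^1
    return np.array([x[0] + x[1] - 1.0])

def feasible(x, tol=1e-9):
    # x lies in the feasible region iff every constraint is satisfied
    return bool(np.all(g(x) <= tol))
```

Note that f has two separated local minimizers at x1 = ±1, so no convexity-based method is guaranteed to find the global one; this is the situation the paper targets.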
Many problems in engineering applications can be formulated as dynamic optimization problems, such as kinematic control of redundant robot manipulators (Wang, Hu, & Jiang, 1999), nonlinear model predictive control (Piche et al., 2000, Yan and Wang, 2012), hierarchical control of interconnected dynamic systems (Hou, Gupta, Nikiforuk, Tan, & Cheng, 2007), compressed sensing in adaptive signal processing (Balavoine, Romberg, & Rozell, 2012), and so on. For example, real-time motion planning and control of redundant robot manipulators can be formulated as constrained dynamic optimization problems with nonconvex objective functions for simultaneously minimizing kinetic energy and maximizing manipulability. Similarly, in nonlinear and robust model predictive control, optimal control commands have to be computed within a moving time window by repetitively solving constrained optimization problems with nonconvex objective functions for error and control-variation minimization and robustness maximization. The difficulty of dynamic optimization is significantly amplified when the optimal solutions have to be obtained in real time, especially in the presence of uncertainty. In such applications, compared with traditional numerical optimization algorithms, neurodynamic optimization approaches based on recurrent neural networks have several unique advantages. Recurrent neural networks can be physically implemented in designated hardware/firmware, such as very-large-scale integration (VLSI) reconfigurable analog chips, optical chips, graphics processing units (GPUs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), and so on. Recent technological advances make the design and implementation of neural networks more feasible at a more reasonable cost (Asai, Kanazawa, & Amemiya, 2003).
Since the pioneering work on Hopfield neural networks (Hopfield and Tank, 1985, Tank and Hopfield, 1986), neurodynamic optimization has achieved great success over the past three decades. For example, a deterministic annealing neural network was proposed for solving convex programming problems (Wang, 1994); a Lagrangian network was developed for solving convex optimization problems with linear equality constraints based on the Lagrangian optimality conditions (Xia, 2003); the primal–dual network (Xia, 1996), the dual network (Xia, Feng, & Wang, 2004), and the simplified dual network (Liu & Wang, 2006) were developed for solving convex optimization problems based on the Karush–Kuhn–Tucker optimality conditions; and projection neural networks were developed for constrained optimization problems based on the projection method (Gao, 2004, Hu and Wang, 2007, Liu et al., 2010, Xia et al., 2002). In recent years, neurodynamic optimization approaches have been extended to nonconvex and generalized convex optimization problems. For example, a Lagrangian neural network was proposed for nonsmooth convex optimization by using the Lagrangian saddle-point theorem (Cheng et al., 2011); a recurrent neural network with global attractivity was proposed for solving nonsmooth convex optimization problems (Bian & Xue, 2013); and several neural networks were developed for nonsmooth pseudoconvex or quasiconvex optimization using Clarke's generalized gradient (Guo et al., 2011, Hosseini et al., 2013, Hu and Wang, 2006, Liu et al., 2012, Liu and Wang, 2013). In addition, various neural networks with finite-time convergence properties were developed (Bian and Xue, 2009, Forti et al., 2004, Forti et al., 2006, Xue and Bian, 2008).
Despite this enormous success, neurodynamic optimization approaches reach their solvability limits at constrained optimization problems with unimodal objective functions and fall short for global optimization with general nonconvex objective functions. Little progress has been made on nonconvex optimization in the neural network community. Instead of seeking global optimal solutions, a more attainable and meaningful goal is to design neural networks for searching critical points (e.g., Karush–Kuhn–Tucker points) of nonconvex optimization problems. Xia, Feng, and Wang (2008) proposed a neural network for solving nonconvex optimization problems with inequality constraints, whose equilibrium points correspond to the KKT points; however, the condition required for global convergence, that the Hessian matrix of the associated Lagrangian function be positive semidefinite, is too strong. In this paper, a one-layer recurrent neural network based on an exact penalty function method is proposed for searching KKT points of nonconvex optimization problems with inequality constraints. The contributions of this paper can be summarized as follows. (1) With a sufficiently large penalty parameter, the state of the proposed neural network converges to the feasible region in finite time and stays there thereafter; (2) the proposed neural network is convergent to its equilibrium point set; (3) any equilibrium point of the proposed neural network corresponds to a KKT point of the nonconvex problem, and vice versa; (4) if the objective function and the constraint functions meet one of the following conditions: (a) the objective function and the constraint functions are convex, or (b) the objective function is pseudoconvex and the constraint functions are quasiconvex, then the state of the proposed network converges to the global optimal solution.
If the objective function and the constraint functions are invex with respect to the same kernel, then the state of the proposed network converges to the optimal solution set. Hence, the results presented in Li, Yan, and Wang (2014) can be viewed as special cases of this paper.
The remainder of this paper is organized as follows. Section 2 introduces some definitions and preliminary results. Section 3 discusses an exact penalty function. Section 4 presents a neural network model and analyzes its convergence properties. Section 5 provides simulation results. Finally, Section 6 concludes this paper.
Preliminaries
In this section, we present definitions and properties concerning set-valued analysis, nonsmooth analysis, and generalized convex functions, which are needed in the remainder of the paper. We refer readers to Aubin and Cellina (1984), Cambini and Martein (2009), Clarke (1969), Filippov (1988) and Pardalos (2008) for a more thorough treatment.
Let ℝⁿ be the n-dimensional real Euclidean space with the scalar product ⟨·, ·⟩ and its related norm ‖x‖ = ⟨x, x⟩^(1/2).
Exact penalty function
In this section, an appropriate neighborhood of the feasible region is given and an exact penalty function is defined and analyzed on this neighborhood.
Consider the following penalty function: P(x) = Σᵢ₌₁ᵐ max{0, gᵢ(x)}. For any x ∈ ℝⁿ, we define the index sets I⁺(x) = {i : gᵢ(x) > 0}, I⁰(x) = {i : gᵢ(x) = 0}, and I⁻(x) = {i : gᵢ(x) < 0}. By Proposition 6 and Proposition 8, Clarke's generalized gradient of P is as follows: ∂P(x) = Σ_{i∈I⁺(x)} ∇gᵢ(x) + Σ_{i∈I⁰(x)} [0, 1]∇gᵢ(x). To ensure exact penalty, the penalty parameter must be chosen sufficiently large.
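As a concrete numerical illustration (not code from the paper), the exact penalty term P(x) = Σᵢ max{0, gᵢ(x)} and one element of its Clarke generalized gradient can be evaluated as follows; the function names and the choice λᵢ = 0.5 on active constraints are assumptions made for this sketch:

```python
import numpy as np

def penalty(g):
    """Exact penalty term P(x) = sum_i max(0, g_i(x)).

    P(x) = 0 exactly on the feasible region {x : g(x) <= 0}."""
    return float(np.sum(np.maximum(0.0, g)))

def penalty_subgradient(g, J, tol=1e-9):
    """One element of Clarke's generalized gradient of P at x.

    g : (m,) constraint values g(x);  J : (m, n) Jacobian of g at x.
    Violated constraints (g_i > 0) contribute grad g_i; active ones
    (g_i = 0) contribute lambda_i * grad g_i with lambda_i in [0, 1],
    and we pick lambda_i = 0.5 as one valid selection."""
    lam = np.where(g > tol, 1.0, np.where(np.abs(g) <= tol, 0.5, 0.0))
    return lam @ J
```

For example, with constraint values g(x) = (2, −1) only the first (violated) constraint contributes, so the returned subgradient is exactly the first row of the Jacobian.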
Model analysis
Based on the exact penalty property of P, the following recurrent neural network is proposed for solving the optimization problem (1): ẋ(t) ∈ −∇f(x(t)) − σ∂P(x(t)), (19) where σ > 0 is the penalty parameter.
Definition 11. A point x* ∈ ℝⁿ is said to be an equilibrium point of system (19) if 0 ∈ −∇f(x*) − σ∂P(x*). We denote by 𝒮 the set of equilibrium points of (19).
Definition 12. A state of (19) on [0, T] is an absolutely continuous function x(·) satisfying x(0) = x₀ and the inclusion (19) for almost all t ∈ [0, T].
Since the right-hand side of (19) is an upper semicontinuous set-valued map with nonempty, compact, and convex values, the local existence of a state of (19) is guaranteed.
Simulation results
In this section, simulation results on three nonconvex optimization problems are provided to illustrate the effectiveness and efficiency of the proposed recurrent neural network model (19). Example 1. Consider a nonconvex optimization problem whose objective function is nonconvex, as shown in Fig. 1, and whose generalized gradient is computed accordingly.
There are three KKT points
Conclusion
This paper presents a one-layer recurrent neural network for nonconvex optimization problems with inequality constraints based on an exact penalty design. The proposed neural network is proved to be convergent to its equilibrium point set, and any equilibrium point of the neural network corresponds to a KKT point of the nonconvex problem. Moreover, it is proved that any state of the proposed neural network converges to the feasible region in finite time and stays there thereafter. Simulation results on three nonconvex optimization problems substantiate the theoretical results.
References (41)
- A recurrent neural network for solving a class of generalized convex optimization problems. Neural Networks (2013)
- A one-layer recurrent neural network for constrained nonsmooth invex optimization. Neural Networks (2014)
- A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization. Neural Networks (2012)
- A dynamic system model for solving convex nonlinear optimization problems. Communications in Nonlinear Science and Numerical Simulation (2012)
- A deterministic annealing neural network for convex programming. Neural Networks (1994)
- A subthreshold MOS neuron circuit based on the Volterra system. IEEE Transactions on Neural Networks (2003)
- Differential inclusions (1984)
- Convergence and rate analysis of neural networks for sparse approximation. IEEE Transactions on Neural Networks and Learning Systems (2012)
- Smoothing neural network for constrained non-Lipschitz optimization with applications. IEEE Transactions on Neural Networks and Learning Systems (2012)
- Subgradient-based neural networks for nonsmooth nonconvex optimization problems. IEEE Transactions on Neural Networks (2009)
- Neural network for solving constrained convex optimization problems with global attractivity. IEEE Transactions on Circuits and Systems I: Regular Papers
- Generalized convexity and optimization: theory and applications
- Recurrent neural network for nonsmooth convex optimization problems with applications to the identification of genetic regulatory networks. IEEE Transactions on Neural Networks
- An analysis of a class of neural networks for solving linear programming problems. IEEE Transactions on Automatic Control
- Optimization and non-smooth analysis
- Differential equations with discontinuous right-hand side
- Generalized neural network for nonsmooth nonlinear programming problems. IEEE Transactions on Circuits and Systems I
- Convergence of neural networks for programming problems via a nonsmooth Łojasiewicz inequality. IEEE Transactions on Neural Networks
- A novel neural network for nonlinear convex programming. IEEE Transactions on Neural Networks
- A one-layer recurrent neural network for pseudoconvex optimization subject to linear equality constraints. IEEE Transactions on Neural Networks
☆ The work described in the paper was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China under Grant CUHK416812E; and by the National Natural Science Foundation of China under Grant 61273307.