Neural Networks

Volume 61, January 2015, Pages 10–21

A one-layer recurrent neural network for constrained nonconvex optimization

https://doi.org/10.1016/j.neunet.2014.09.009

Abstract

In this paper, a one-layer recurrent neural network is proposed for solving nonconvex optimization problems subject to general inequality constraints, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network converges to the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large. Lower bounds on the penalty parameter and the convergence time are also estimated. In addition, any state of the proposed neural network converges to its equilibrium point set, which satisfies the Karush–Kuhn–Tucker conditions of the optimization problem. Moreover, the equilibrium point set coincides with the optimal solution set of the nonconvex optimization problem if the objective function and constraints satisfy given conditions. Four numerical examples are provided to illustrate the performance of the proposed neural network.

Introduction

In this paper, the following constrained nonconvex minimization problem is considered:

$$\min\ f(x) \quad \text{subject to}\quad g_i(x) \le 0,\; i \in I = \{1, 2, \ldots, m\}, \tag{1}$$

where $x \in \mathbb{R}^n$ is the decision vector, and $f$ and $g_i : \mathbb{R}^n \to \mathbb{R}$ ($i \in I$) are continuously differentiable functions, but not necessarily convex. The feasible region $F = \{x \in \mathbb{R}^n : g_i(x) \le 0,\ i \in I\}$ is assumed to be a nonempty set. We denote by $G$ the set of global solutions of problem (1): $G = \{x \in F : f(y) \ge f(x),\ \forall y \in F\}$.
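
For concreteness, the short sketch below shows one way problem (1) might be encoded for numerical experiments; the function names (`f`, `g`, `is_feasible`) are illustrative only, and the specific objective and constraints are borrowed from Example 1 in Section 5.

```python
import numpy as np

# Hypothetical encoding of problem (1): minimize f(x) s.t. g_i(x) <= 0, i in I.
# The specific objective and constraints are those of Example 1 in Section 5.
def f(x):
    return 2.0 * x[0] * x[1]  # nonconvex objective (indefinite Hessian)

g = [
    lambda x: x[0] + 4.0 * x[1] - 1.0,      # g_1(x) <= 0
    lambda x: x[0] ** 2 + x[1] ** 2 - 1.0,  # g_2(x) <= 0
]

def is_feasible(x, tol=1e-9):
    """Membership test for the feasible region F = {x : g_i(x) <= 0 for all i}."""
    return all(gi(x) <= tol for gi in g)

print(is_feasible(np.array([0.0, 0.0])))  # True: the origin satisfies both constraints
```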

Many problems in engineering applications can be formulated as dynamic optimization problems, such as kinematic control of redundant robot manipulators (Wang, Hu, & Jiang, 1999), nonlinear model predictive control (Piche et al., 2000, Yan and Wang, 2012), hierarchical control of interconnected dynamic systems (Hou, Gupta, Nikiforuk, Tan, & Cheng, 2007), compressed sensing in adaptive signal processing (Balavoine, Romberg, & Rozell, 2012), and so on. For example, real-time motion planning and control of redundant robot manipulators can be formulated as constrained dynamic optimization problems with nonconvex objective functions for simultaneously minimizing kinetic energy and maximizing manipulability. Similarly, in nonlinear and robust model predictive control, optimal control commands have to be computed with a moving time window by repetitively solving constrained optimization problems with nonconvex objective functions for error and control variation minimization, and robustness maximization. The difficulty of dynamic optimization is significantly amplified when the optimal solutions have to be obtained in real time, especially in the presence of uncertainty. In such applications, compared with traditional numerical optimization algorithms, neurodynamic optimization approaches based on recurrent neural networks have several unique advantages. Recurrent neural networks can be physically implemented in designated hardware/firmware, such as very-large-scale integration (VLSI) reconfigurable analog chips, optical chips, graphics processing units (GPUs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), and so on. Recent technological advances make the design and implementation of neural networks more feasible at a more reasonable cost (Asai, Kanazawa, & Amemiya, 2003).

Since the pioneering work of Hopfield neural networks (Hopfield and Tank, 1985, Tank and Hopfield, 1986), neurodynamic optimization has achieved great success in the past three decades. For example, a deterministic annealing neural network was proposed for solving convex programming problems (Wang, 1994); a Lagrangian network was developed for solving convex optimization problems with linear equality constraints based on the Lagrangian optimality conditions (Xia, 2003); the primal–dual network (Xia, 1996), the dual network (Xia, Feng, & Wang, 2004), and the simplified dual network (Liu & Wang, 2006) were developed for solving convex optimization problems based on the Karush–Kuhn–Tucker optimality conditions; and projection neural networks were developed for constrained optimization problems based on the projection method (Gao, 2004, Hu and Wang, 2007, Liu et al., 2010, Xia et al., 2002). In recent years, neurodynamic optimization approaches have been extended to nonconvex and generalized convex optimization problems. For example, a Lagrangian neural network was proposed for nonsmooth convex optimization by using the Lagrangian saddle-point theorem (Cheng et al., 2011); a recurrent neural network with global attractivity was proposed for solving nonsmooth convex optimization problems (Bian & Xue, 2013); and several neural networks were developed for nonsmooth pseudoconvex or quasiconvex optimization using Clarke's generalized gradient (Guo et al., 2011, Hosseini et al., 2013, Hu and Wang, 2006, Liu et al., 2012, Liu and Wang, 2013). In addition, various neural networks with finite-time convergence properties were developed (Bian and Xue, 2009, Forti et al., 2004, Forti et al., 2006, Xue and Bian, 2008).

Despite the enormous success, existing neurodynamic optimization approaches reach their solvability limits at constrained optimization problems with unimodal objective functions and fall short for global optimization with general nonconvex objective functions. Little progress has been made on nonconvex optimization in the neural network community. Instead of seeking global optimal solutions, a more attainable and meaningful goal is to design neural networks for searching critical points (e.g., Karush–Kuhn–Tucker points) of nonconvex optimization problems. Xia, Feng, and Wang (2008) proposed a neural network for solving nonconvex optimization problems with inequality constraints, whose equilibrium points correspond to the KKT points; however, the condition required there for global convergence, namely that the Hessian matrix of the associated Lagrangian function is positive semidefinite, is too strong. In this paper, a one-layer recurrent neural network based on an exact penalty function method is proposed for searching KKT points of nonconvex optimization problems with inequality constraints. The contributions of this paper can be summarized as follows. (1) The state of the proposed neural network converges to the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large; (2) the proposed neural network is convergent to its equilibrium point set; (3) any equilibrium point x of the proposed neural network corresponds to a KKT pair (x, λ) of the nonconvex problem, and vice versa; (4) if the objective function and the constraint functions satisfy one of the following conditions: (a) the objective function and the constraint functions are convex, or (b) the objective function is pseudoconvex and the constraint functions are quasiconvex, then the state of the proposed network converges to the global optimal solution; if the objective function and the constraint functions are invex with respect to the same kernel, then the state of the proposed network converges to the optimal solution set. Hence, the results presented in Li, Yan, and Wang (2014) can be viewed as special cases of those in this paper.

The remainder of this paper is organized as follows. Section 2 introduces some definitions and preliminary results. Section 3 discusses an exact penalty function. Section 4 presents the neural network model and analyzes its convergence properties. Section 5 provides simulation results. Finally, Section 6 concludes this paper.


Preliminaries

In this section, we present definitions and properties concerning set-valued analysis, nonsmooth analysis, and generalized convex functions, which are needed in the remainder of the paper. We refer readers to Aubin and Cellina (1984), Cambini and Martein (2009), Clarke (1969), Filippov (1988) and Pardalos (2008) for a more thorough treatment.

Let $\mathbb{R}^n$ be the real Euclidean space with the scalar product $\langle x, y \rangle = \sum_{i=1}^{n} x_i y_i$, $x, y \in \mathbb{R}^n$, and its induced norm $\|x\| = \left[\sum_{i=1}^{n} x_i^2\right]^{1/2}$. For $x \in \mathbb{R}^n$ and $A \subseteq \mathbb{R}^n$, the distance from $x$ to $A$ is $\operatorname{dist}(x, A) = \inf_{y \in A} \|x - y\|$.

Exact penalty function

In this section, an appropriate neighborhood of the feasible region F is given and an exact penalty function is defined and analyzed on this neighborhood.

Consider the following function: $V(x) = \sum_{i=1}^{m} \max\{0, g_i(x)\}$. For any $x \in \mathbb{R}^n$, we define the index sets $I^{0}(x) = \{i : g_i(x) = 0,\ i \in I\}$, $I^{+}(x) = \{i : g_i(x) > 0,\ i \in I\}$, and $I^{-}(x) = \{i : g_i(x) < 0,\ i \in I\}$. By Propositions 6 and 8, Clarke's generalized gradient of $V(x)$ is $\partial V(x) = \sum_{i \in I^{+}(x)} \nabla g_i(x) + \sum_{i \in I^{0}(x)} [0, 1]\,\nabla g_i(x)$. To ensure exact penalty, the following
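
A minimal numerical sketch of this construction is given below, assuming differentiable constraint functions and picking one particular element of the interval [0, 1] on the active index set; the helper names, the `active_weight` selection, and the tolerance are illustrative assumptions rather than part of the paper.

```python
import numpy as np

def penalty_V(x, g):
    """V(x) = sum_i max{0, g_i(x)} for a list g of constraint callables."""
    return sum(max(0.0, gi(x)) for gi in g)

def penalty_subgradient(x, g, grad_g, active_weight=1.0, tol=1e-9):
    """One element of Clarke's generalized gradient of V at x:
    grad g_i(x) for every i in I+(x), plus active_weight in [0, 1]
    times grad g_i(x) for every i in I0(x); I-(x) contributes nothing."""
    v = np.zeros_like(np.asarray(x, dtype=float))
    for gi, dgi in zip(g, grad_g):
        val = gi(x)
        if val > tol:            # i in I+(x): violated constraint
            v += dgi(x)
        elif val >= -tol:        # i in I0(x): active constraint
            v += active_weight * dgi(x)
    return v
```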

Model analysis

Based on the exact penalty property of $E_\sigma(x)$, the following recurrent neural network is proposed for solving the optimization problem (1):

$$\dot{x}(t) \in -\nabla f(x) - \frac{1}{\sigma}\sum_{i=1}^{m} \partial \max\{0, g_i(x)\}. \tag{19}$$
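
One simple way to simulate the differential inclusion (19) is a forward Euler scheme that, at each step, selects a single element from the right-hand side; the step size, iteration count, and selection rule in the sketch below are illustrative assumptions, not part of the proposed design.

```python
import numpy as np

def simulate(grad_f, g, grad_g, x0, sigma, dt=1e-3, steps=20000, tol=1e-9):
    """Forward-Euler sketch of (19): at each step move along
    -grad f(x) - (1/sigma) * (a selected element of sum_i d max{0, g_i(x)}),
    choosing the weight 1 from [0, 1] on the active set {g_i(x) = 0}."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        v = np.array(grad_f(x), dtype=float)
        for gi, dgi in zip(g, grad_g):
            if gi(x) > -tol:                  # violated or active constraint
                v += (1.0 / sigma) * np.asarray(dgi(x), dtype=float)
        x = x - dt * v                        # Euler step along -dE_sigma(x)
    return x
```

A self-contained instance of this routine for Example 1 is sketched in the simulation section below.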

Definition 11

$\bar{x} \in D$ is said to be an equilibrium point of system (19) if $0 \in \partial E_\sigma(\bar{x})$. We denote by $E(\sigma)$ the set of equilibrium points of (19).

Definition 12

A state $x(\cdot)$ of (19) on $[0, t_1]$ is an absolutely continuous function satisfying $x(0) = x_0$ and $\dot{x}(t) \in -\partial E_\sigma(x(t))$ for almost all $t \in [0, t_1]$.

Since $\partial E_\sigma(\cdot)$ is an upper semicontinuous set-valued map with nonempty

Simulation results

In this section, simulation results on three nonconvex optimization problems are provided to illustrate the effectiveness and efficiency of the proposed recurrent neural network model (19).

Example 1

Consider a nonconvex optimization problem as follows: $\min f(x) = 2x_1 x_2$ subject to $x_1 + 4x_2 - 1 \le 0$, $x_1^2 + x_2^2 - 1 \le 0$, where the objective function is nonconvex, as shown in Fig. 1. The generalized gradient $\partial E_\sigma$ is computed as $\partial E_\sigma(x) = \nabla f(x) + (1/\sigma)\,\partial V(x)$, where $V(x) = \max\{0, x_1 + 4x_2 - 1\} + \max\{0, x_1^2 + x_2^2 - 1\}$.
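
As a rough numerical illustration of this example (not reproducing the paper's simulations), the sketch below integrates the dynamics (19) with a forward Euler scheme from an arbitrarily chosen initial state; the penalty parameter, step size, and initial point are assumed values, and the discrete scheme may chatter slightly around the constraint boundary.

```python
import numpy as np

# Example 1 data: f(x) = 2*x1*x2, g1(x) = x1 + 4*x2 - 1, g2(x) = x1^2 + x2^2 - 1.
grad_f = lambda x: np.array([2.0 * x[1], 2.0 * x[0]])
g      = [lambda x: x[0] + 4.0 * x[1] - 1.0,
          lambda x: x[0] ** 2 + x[1] ** 2 - 1.0]
grad_g = [lambda x: np.array([1.0, 4.0]),
          lambda x: np.array([2.0 * x[0], 2.0 * x[1]])]

def run(x0, sigma=0.1, dt=1e-3, steps=50000, tol=1e-9):
    """Euler sketch of (19) for Example 1: x'(t) in -grad f(x) - (1/sigma) * dV(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        v = grad_f(x)
        for gi, dgi in zip(g, grad_g):
            if gi(x) > -tol:                  # constraint violated or active
                v = v + (1.0 / sigma) * dgi(x)
        x = x - dt * v
    return x

x_final = run(np.array([0.8, 0.8]))               # assumed (infeasible) initial state
print(x_final, [float(gi(x_final)) for gi in g])  # final state and constraint values
```

With a sufficiently heavy penalty weight 1/σ, the trajectory is expected to enter the feasible region quickly and then settle near a KKT point of the problem.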

There are three KKT points

Conclusion

This paper presents a one-layer recurrent neural network for nonconvex optimization problems with inequality constraints based on an exact penalty design. The proposed neural network is proved to be convergent to its equilibrium point set and any equilibrium point of the neural network corresponds to a KKT point of the nonconvex problem. Moreover, it is proved that any state of the proposed neural network converges to the feasible region in finite time and stays there thereafter. Simulation

References (41)

  • W. Bian et al., Neural network for solving constrained convex optimization problems with global attractivity, IEEE Transactions on Circuits and Systems I: Regular Papers (2013)
  • A. Cambini et al., Generalized convexity and optimization: theory and applications (2009)
  • L. Cheng et al., Recurrent neural network for nonsmooth convex optimization problems with applications to the identification of genetic regulatory networks, IEEE Transactions on Neural Networks (2011)
  • E. Chong et al., An analysis of a class of neural networks for solving linear programming problems, IEEE Transactions on Automatic Control (1999)
  • F. Clarke, Optimization and nonsmooth analysis (1969)
  • A. Filippov, Differential equations with discontinuous right-hand side (1988)
  • M. Forti et al., Generalized neural network for nonsmooth nonlinear programming problems, IEEE Transactions on Circuits and Systems I (2004)
  • M. Forti et al., Convergence of neural networks for programming problems via a nonsmooth Łojasiewicz inequality, IEEE Transactions on Neural Networks (2006)
  • X. Gao, A novel neural network for nonlinear convex programming, IEEE Transactions on Neural Networks (2004)
  • Z. Guo et al., A one-layer recurrent neural network for pseudoconvex optimization subject to linear equality constraints, IEEE Transactions on Neural Networks (2011)

    The work described in the paper was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China under Grant CUHK416812E; and by the National Natural Science Foundation of China under Grant 61273307.
