An efficient simplified neural network for solving linear and quadratic programming problems
Introduction
Solving large linear and quadratic programming problems is one of the basic tasks encountered in operations research. In many applications, a real-time solution of a linear or quadratic programming problem is desired. Moreover, most optimization problems with nonlinear objective functions are approximated by second-order models and solved numerically by a quadratic programming technique [4], [5]. Traditional algorithms, such as the simplex algorithm or Karmarkar's method, for solving linear programming problems are computationally too expensive for such applications. One possible alternative approach is to employ neural networks based on analog circuits [1], [2], [3]. The most important advantages of neural networks are their massively parallel processing capacity and fast convergence properties.
In 1986, Tank and Hopfield [3] proposed a neural network for solving linear programming problems which was mapped onto a closed-loop circuit. Although the equilibrium point of the Tank and Hopfield network may not be a solution of the original problem, this seminal work inspired many researchers to investigate other neural networks for solving linear and nonlinear programming problems (see [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17]). Kennedy and Chua [11] extended the Tank and Hopfield network by developing a neural network for solving nonlinear programming problems through satisfaction of the Karush–Kuhn–Tucker optimality conditions [12]. The network proposed by Kennedy and Chua contains a penalty parameter; thus, it generates only approximate solutions, and implementation problems arise when the penalty parameter is large. To avoid the use of penalty parameters, significant work has been carried out in recent years [7], [9], [10]. For example, Rodriguez-Vazquez et al. [13] proposed a switched-capacitor neural network for solving a class of nonlinear convex programming problems. This network is suitable only for cases in which the optimal solution lies within the feasible region; otherwise, the network may have no equilibrium point [14]. Although the model proposed in [15] overcomes these drawbacks and is robust in both continuous- and discrete-time implementations, its main disadvantage is that it requires many rather expensive analog multipliers for the variables; consequently, the hardware implementation is costly and the accuracy of the solutions is adversely affected. The network of Xia [16], [17] improves on the proposal in [15] in terms of accuracy and implementation cost. The network we discuss here is both more efficient and less costly than those of [15], [16], [17].
The paper is organized as follows. In Section 2, we introduce the basic problem and the model for the new neural network. Section 3 discusses some theoretical aspects of the model and analyzes its global convergence. The circuit implementation of the new model and a comparative analysis are given in Section 4. Simulation results are presented in Section 5. Conclusions are given in Section 6.
Basic problems and neural network models
Consider the QP problem of the form:

minimize Q(x) = (1/2)xᵀAx + cᵀx subject to Dx = b, x ≥ 0, (1)

and its dual:

maximize bᵀy − (1/2)xᵀAx subject to Dᵀy − Ax ≤ c, (2)

where ∇Q(x) = Ax + c, A is an m × m real symmetric positive semidefinite matrix, D is an n × m real matrix, c, x ∈ ℝᵐ, and b, y ∈ ℝⁿ. Clearly, the LP problem in standard form and its dual are special cases of the QP problem and its dual for which A = 0m×m.
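As a concrete sanity check of this primal-dual pair, the following sketch (an illustration added here, not from the paper; the instance A = I, c = 0, D = [1 1], b = [1] is chosen purely for simplicity) verifies the Karush–Kuhn–Tucker conditions that characterize a joint optimum of (1) and (2):

```python
# Illustrative KKT check for the pair (1)-(2): x* is optimal iff there is
# a multiplier y* with
#   D x* = b,  x* >= 0,  D^T y* - A x* <= c,  x*^T (A x* + c - D^T y*) = 0.
# Tiny instance (an assumption for illustration): A = I, c = 0,
# D = [1 1], b = [1]; the optimum is x* = (0.5, 0.5) with y* = 0.5.

def kkt_residual(A, c, D, b, x, y):
    """Return the worst violation of the KKT conditions for (1)-(2)."""
    m, n = len(x), len(b)
    Ax = [sum(A[i][j] * x[j] for j in range(m)) for i in range(m)]
    Dty = [sum(D[k][i] * y[k] for k in range(n)) for i in range(m)]
    Dx = [sum(D[k][j] * x[j] for j in range(m)) for k in range(n)]
    primal_eq = max(abs(Dx[k] - b[k]) for k in range(n))                # D x = b
    primal_nonneg = max(max(0.0, -x[i]) for i in range(m))              # x >= 0
    dual_feas = max(max(0.0, Dty[i] - Ax[i] - c[i]) for i in range(m))  # D^T y - A x <= c
    comp = abs(sum(x[i] * (Ax[i] + c[i] - Dty[i]) for i in range(m)))   # complementarity
    return max(primal_eq, primal_nonneg, dual_feas, comp)

A = [[1.0, 0.0], [0.0, 1.0]]; c = [0.0, 0.0]
D = [[1.0, 1.0]]; b = [1.0]
res = kkt_residual(A, c, D, b, [0.5, 0.5], [0.5])
print(res)  # 0.0 for the optimal pair
```

With A = 0 the same check reduces to ordinary LP complementary slackness, reflecting the special-case remark above.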
In [15], the following neural network model was proposed for solving problems (1), (2):
Global convergence
In this section, we show that the new neural network described by (4) is globally convergent. We first establish some needed results. Theorem 1. For any initial point z(0) = (x(0), y(0)), there is a unique solution z(t) = (x(t), y(t)) of (4). Proof. Let F(z) denote the right-hand side of (4). Note that (x)⁺ is Lipschitz continuous; it is then easy to see that F(z) is also Lipschitz continuous. From the existence theory of ordinary differential equations [18], there exists a unique solution z(t) with z(0) = (x(0), y(0)).
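The existence argument hinges on the Lipschitz continuity of the projection (x)⁺ = max(0, x); in fact the projection is nonexpansive, |(a)⁺ − (b)⁺| ≤ |a − b|. A quick randomized numerical check of this inequality (illustrative only, not part of the proof):

```python
# Randomized check that the scalar projection (x)+ = max(0, x) is
# 1-Lipschitz: |(a)+ - (b)+| <= |a - b| for all a, b.
import random

random.seed(0)
ok = all(
    abs(max(0.0, a) - max(0.0, b)) <= abs(a - b)
    for a, b in ((random.uniform(-10, 10), random.uniform(-10, 10))
                 for _ in range(10_000))
)
print(ok)  # True
```

Componentwise application of this bound is what makes the full right-hand side F(z) Lipschitz, since the remaining terms are affine in z.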
Circuit implementation of the new model and a comparison
For convenience, let r = (x + Dᵀy − Ax − c)⁺. Then our proposed model (4) and the model (3) proposed in [15] are, respectively, represented as:
A block diagram of model (10) is shown in Fig. 1, where the vectors c, b are the external inputs and the vectors x, y are the network outputs. A conceptual artificial neural network (ANN) implementation of the vector r is shown in Fig. 2, where A = (gij) and D = (dij).
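Since the displayed equations for models (4) and (10) are not reproduced here, the following sketch assumes a generic primal-dual projection dynamics consistent with the definition of r, namely dx/dt = r − x and dy/dt = b − Dr (an assumption for illustration, not necessarily the paper's exact model), integrated by forward Euler on a small QP:

```python
# Hypothetical primal-dual projection dynamics (an assumption, not the
# paper's exact model (4)):
#   dx/dt = r - x,   dy/dt = b - D r,   r = (x + D^T y - A x - c)^+,
# integrated by forward Euler on the tiny QP
#   minimize (1/2) x^T A x + c^T x  subject to  D x = b, x >= 0,
# with A = I, c = 0, D = [1 1], b = [1]; the optimum is x* = (0.5, 0.5).

def simulate(A, c, D, b, x, y, h=0.01, steps=5000):
    m, n = len(x), len(y)
    for _ in range(steps):
        Ax = [sum(A[i][j] * x[j] for j in range(m)) for i in range(m)]
        Dty = [sum(D[k][i] * y[k] for k in range(n)) for i in range(m)]
        r = [max(0.0, x[i] + Dty[i] - Ax[i] - c[i]) for i in range(m)]
        Dr = [sum(D[k][i] * r[i] for i in range(m)) for k in range(n)]
        x = [x[i] + h * (r[i] - x[i]) for i in range(m)]   # Euler step for x
        y = [y[k] + h * (b[k] - Dr[k]) for k in range(n)]  # Euler step for y
    return x, y

A = [[1.0, 0.0], [0.0, 1.0]]; c = [0.0, 0.0]
D = [[1.0, 1.0]]; b = [1.0]
x, y = simulate(A, c, D, b, [0.0, 0.0], [0.0])
print(x, y)  # approaches x* = (0.5, 0.5), y* = 0.5
```

At an equilibrium of these dynamics, r = x and Dr = b, so x satisfies the KKT conditions of the primal-dual pair; this is the sense in which such projection networks need no penalty parameter or analog multipliers for the variables.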
Simulation examples
We discuss the simulation results for a numerical example to demonstrate the global convergence property of the proposed neural network. Example. Consider the following (QP) problem and its dual (DQP):
We have written a Matlab 6.5 code for solving (4) and executed it on a Pentium IV.
Concluding remarks
We have shown analytically and verified by simulation that our proposed neural network for solving the LP and QP problems is globally convergent. Our new neural network produces highly accurate solutions to the LP and QP problems and requires no analog multipliers for the variables. Hence, the proposed network, in several ways, improves over previously proposed models.
References (18)
- L.O. Chua et al., Nonlinear programming without computation, IEEE Transactions on Circuits and Systems, CAS (1984)
- G. Wilson, Quadratic programming analogs, IEEE Transactions on Circuits and Systems, CAS (1986)
- D.W. Tank et al., Simple neural optimization networks: an A/D converter, signal decision network, and linear programming circuit, IEEE Transactions on Circuits and Systems, CAS (1986)
- D.G. Luenberger, Introduction to Linear and Nonlinear Programming (1989)
- M.S. Bazaraa et al., Nonlinear Programming, Theory and Algorithms (1990)
- E.K.P. Chong et al., An analysis of a class of neural networks for solving linear programming problems, IEEE Transactions on Automatic Control (1999)
- A. Malek, H.G. Oskoei, Numerical solutions for constrained quadratic problems using high-performance neural networks, ...
- Primal–dual solution for the linear programming problems using neural networks, Applied Mathematics and Computation (2004)
- A high-performance feedback neural network for solving convex nonlinear programming problems, IEEE Transactions on Neural Networks (2003)