An efficient simplified neural network for solving linear and quadratic programming problems

https://doi.org/10.1016/j.amc.2005.07.025

Abstract

We present a high-performance, simplified neural network that improves on existing neural networks for solving general linear and quadratic programming problems. The network requires no parameter setting, leads to simple hardware that needs no analog multipliers, and is shown to be stable and globally convergent to the exact solution. Moreover, the network solves both linear and quadratic programming problems and their duals simultaneously. High accuracy of the obtained solutions and low implementation cost are among its features. We prove the global convergence of the network analytically and verify the results numerically.

Introduction

Solving large linear and quadratic programming problems is one of the basic problems encountered in operations research. In many applications a real-time solution of a linear or quadratic programming problem is desired. Moreover, optimization problems with nonlinear objective functions are usually approximated by second-order models and solved numerically by quadratic programming techniques [4], [5]. Traditional algorithms such as the simplex algorithm or Karmarkar’s method for solving linear programming problems are computationally too expensive for such real-time applications. One possible alternative is to employ neural networks based on analog circuits [1], [2], [3]. The most important advantages of neural networks are their massively parallel processing capacity and fast convergence properties.

In 1986, Tank and Hopfield [3] proposed a neural network for solving linear programming problems which was mapped onto a closed-loop circuit. Although the equilibrium point of the Tank and Hopfield network may not be a solution of the original problem, this seminal work has inspired many researchers to investigate other neural networks for solving linear and nonlinear programming problems (see [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17]). Kennedy and Chua [11] extended the Tank and Hopfield network by developing a neural network for solving nonlinear programming problems through satisfaction of the Karush–Kuhn–Tucker optimality conditions [12]. The network proposed by Kennedy and Chua contains a penalty parameter; it therefore generates only approximate solutions, and implementation problems arise when the penalty parameter is large. To avoid the use of penalty parameters, significant work has been carried out in recent years [7], [9], [10]. For example, Rodriguez-Vazquez et al. [13] proposed a switched-capacitor neural network for solving a class of nonlinear convex programming problems. This network is suitable only for cases in which the optimal solution lies within the feasible region; otherwise, the network may have no equilibrium point [14]. Although the model proposed in [15] overcomes the aforementioned drawbacks and is robust for both continuous- and discrete-time implementations, its main disadvantage is the requirement of many rather expensive analog multipliers for the variables. Thus, not only is the hardware implementation very costly, but the accuracy of the solutions is also greatly affected. The network of Xia [16], [17] improves on the proposal in [15] in terms of accuracy and implementation cost. The network we discuss here is both more efficient and less costly than those of Xia et al. [15], [16], [17].

The paper is organized as follows. In Section 2, we introduce the basic problem and the model for the new neural network. Section 3 discusses some theoretical aspects of the model and analyzes its global convergence. The circuit implementation of the new model and a comparative analysis are given in Section 4. Simulation results are shown in Section 5. Conclusions are given in Section 6.

Section snippets

Basic problems and neural network models

Consider the QP problem of the form

$$\text{Minimize } Q(x) = \tfrac{1}{2}x^{T}Ax + c^{T}x, \quad \text{subject to } Dx = b,\; x \ge 0, \tag{1}$$

and its dual

$$\text{Maximize } \hat{Q}(x, y) = b^{T}y - \tfrac{1}{2}x^{T}Ax, \quad \text{subject to } D^{T}y \le \nabla Q(x), \tag{2}$$

where $\nabla Q(x) = Ax + c$, $A$ is an $m \times m$ real symmetric positive semidefinite matrix, $D$ is an $n \times m$ real matrix, $y, b \in \mathbb{R}^{n}$, and $x, c \in \mathbb{R}^{m}$. Clearly the LP problem in standard form and its dual are special cases of the QP problem and its dual for which $A = 0_{m \times m}$.
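To make the notation concrete, here is a minimal sketch in Python (ours, not the paper's) that encodes the data of problems (1)–(2); the matrices `A`, `D` and vectors `b`, `c` below, and the names `Q`, `grad_Q`, `Q_hat`, are illustrative placeholders, not values or identifiers used by the authors.

```python
import numpy as np

# Illustrative placeholder data for problem (1); these values only fix
# shapes and conventions, they are NOT taken from the paper.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # m x m, symmetric positive semidefinite
D = np.array([[1.0, 1.0]])      # n x m equality-constraint matrix
b = np.array([1.0])             # right-hand side, in R^n
c = np.array([-30.0, -30.0])    # linear cost, in R^m

Q      = lambda x: 0.5 * x @ A @ x + c @ x       # primal objective (1)
grad_Q = lambda x: A @ x + c                     # gradient: Ax + c
Q_hat  = lambda x, y: b @ y - 0.5 * x @ A @ x    # dual objective (2)

# LP special case: with A = 0 the objective reduces to c^T x.
```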

In [15], the following neural network model (3) was proposed for solving problems (1), (2):

$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = -\beta(-D^{T}y + Ax + c) + \beta A\left[x - (x + D^{T}y - Ax - c)^{+}\right]\dots$$

Global convergence

In this section, we show that the new neural network described by (4) is globally convergent. We first discuss some needed results.

Theorem 1

For any $x(0) \in \mathbb{R}^{m}$, $y(0) \in \mathbb{R}^{n}$ there is a unique solution $z(t) = (x(t), y(t))$ with $z(0) = (x(0), y(0))$ for (4).

Proof

Let

$$F(z) = \begin{pmatrix} (I + A)\left[x - (x + D^{T}y - Ax - c)^{+}\right] \\ D\left[(x + D^{T}y - Ax - c)^{+}\right] - b \end{pmatrix}.$$

Note that $(x)^{+}$ is Lipschitz continuous. Then it is easy to see that $F(z)$ is also Lipschitz continuous. From the existence result for ordinary differential equations [18], there exists a unique solution $z(t)$ with $z(0) = (x(0), y(0))$.
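As a hedged illustration of this argument, the sketch below (ours, not the authors') implements the displayed $F(z)$ with NumPy and crudely estimates difference quotients over random pairs; since $(x)^{+} = \max(x, 0)$ has Lipschitz constant 1, the quotients stay bounded. The data are the same illustrative placeholders as before.

```python
import numpy as np

def F(z, A, D, b, c):
    """The map F(z) from the proof, with z = (x, y) stacked."""
    m = A.shape[0]
    x, y = z[:m], z[m:]
    r = np.maximum(x + D.T @ y - A @ x - c, 0.0)   # (x + D^T y - Ax - c)^+
    return np.concatenate([(np.eye(m) + A) @ (x - r), D @ r - b])

# Crude empirical check of Lipschitz continuity on placeholder data:
A = np.array([[2.0, 1.0], [1.0, 2.0]])
D = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([-30.0, -30.0])

rng = np.random.default_rng(0)
quotients = []
for _ in range(1000):
    z1, z2 = rng.normal(size=3), rng.normal(size=3)   # z in R^{m+n} = R^3
    quotients.append(np.linalg.norm(F(z1, A, D, b, c) - F(z2, A, D, b, c))
                     / np.linalg.norm(z1 - z2))
print("max difference quotient:", max(quotients))     # stays bounded
```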

Circuit implementation of the new model and a comparison

For convenience, let $r = (x + D^{T}y - Ax - c)^{+}$. Then our proposed model (4) and the model (3) proposed in [15] are respectively represented as

$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} (I + A)(r - x) \\ -Dr + b \end{pmatrix}, \tag{10}$$

$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -\beta(D^{T}y - Ax - c) + \beta A(r - x) - \lambda \\ -\beta(Dr - b) \end{pmatrix}.$$
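A minimal sketch of how the rewritten model (10) could be integrated numerically (our Python, using SciPy's `solve_ivp` as the integrator; the paper does not prescribe one, and `make_rhs` is our own helper name):

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_rhs(A, D, b, c):
    """Right-hand side of the proposed dynamics (10):
       dx/dt = (I + A)(r - x),  dy/dt = -Dr + b,
       with r = (x + D^T y - Ax - c)^+."""
    m = A.shape[0]
    I = np.eye(m)
    def rhs(t, z):
        x, y = z[:m], z[m:]
        r = np.maximum(x + D.T @ y - A @ x - c, 0.0)
        return np.concatenate([(I + A) @ (r - x), b - D @ r])
    return rhs

# Usage: integrate from an arbitrary start; the equilibrium approximates
# the primal-dual solution pair (x*, y*), e.g.
# sol = solve_ivp(make_rhs(A, D, b, c), (0, 50), np.zeros(m + n))
```

Note how (10) contains no scaling parameter, in contrast to the $\beta$ appearing in the model from [15].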

A block diagram of model (10) is shown in Fig. 1, where the vectors c, b are the external inputs and the vectors x, y are the network outputs. A conceptual artificial neural network (ANN) implementation of the vector r is shown in Fig. 2, where $A = (g_{ij})$ and $D = (d_{ij})$.

Remark 2

We note that the neural network …

Simulation examples

We discuss the simulation results for a numerical example to demonstrate the global convergence property of the proposed neural network.

Example

Consider the following QP problem and its dual (DQP):

(QP)  Minimize $x_1^2 + x_2^2 + x_1x_2 - 30x_1 - 30x_2$

subject to

$\tfrac{5}{12}x_1 - x_2 + x_3 = \tfrac{35}{12}$,
$\tfrac{5}{2}x_1 + x_2 + x_4 = \tfrac{35}{2}$,
$-x_1 + x_5 = 5$,
$x_2 + x_6 = 5$,
$x_i \ge 0 \;\; (i = 1, 2, \dots, 6)$;

(DQP)  Maximize $\tfrac{35}{12}y_1 + \tfrac{35}{2}y_2 + 5y_3 + 5y_4 - x_1^2 - x_2^2 - x_1x_2$

subject to

$\tfrac{5}{12}y_1 + \tfrac{5}{2}y_2 - y_3 - 2x_1 - x_2 \le -30$,
$-y_1 + y_2 + y_4 - x_1 - 2x_2 \le -30$,
$y_1 \le 0, \;\; y_2 \le 0, \;\; y_3 \le 0, \;\; y_4 \le 0$.

We have written a MATLAB 6.5 code for solving (4) and executed the code on a Pentium IV. We have …
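Since the authors' MATLAB code is not reproduced here, the following is a hedged Python re-creation of the experiment: it encodes the example's data (with the slack variables $x_3, \dots, x_6$ built into D) and integrates the proposed dynamics (10) from the origin. `solve_ivp`, the time horizon, the tolerances, and the initial point are our choices, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Data of the example: Q(x) = x1^2 + x2^2 + x1*x2 - 30*x1 - 30*x2
# over x in R^6 (x3..x6 are slacks), constraints Dx = b, x >= 0.
A = np.zeros((6, 6))
A[:2, :2] = [[2.0, 1.0],
             [1.0, 2.0]]
D = np.array([[5/12, -1.0, 1.0, 0.0, 0.0, 0.0],
              [ 5/2,  1.0, 0.0, 1.0, 0.0, 0.0],
              [-1.0,  0.0, 0.0, 0.0, 1.0, 0.0],
              [ 0.0,  1.0, 0.0, 0.0, 0.0, 1.0]])
b = np.array([35/12, 35/2, 5.0, 5.0])
c = np.array([-30.0, -30.0, 0.0, 0.0, 0.0, 0.0])

def rhs(t, z):
    x, y = z[:6], z[6:]
    r = np.maximum(x + D.T @ y - A @ x - c, 0.0)
    return np.concatenate([(np.eye(6) + A) @ (r - x), b - D @ r])

sol = solve_ivp(rhs, (0.0, 50.0), np.zeros(10), rtol=1e-9, atol=1e-9)
x_star, y_star = sol.y[:6, -1], sol.y[6:, -1]
print("x* ≈", np.round(x_star, 4))
print("y* ≈", np.round(y_star, 4))
print("residual ||Dx - b|| ≈", np.linalg.norm(D @ x_star - b))
```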

Concluding remarks

We have shown analytically and verified by simulation that our proposed neural network for solving the LP and QP problems is globally convergent. Our new neural network produces highly accurate solutions to the LP and QP problems and requires no analog multipliers for the variables. Hence, the proposed network, in several ways, improves over previously proposed models.

References (18)

  • L.O. Chua et al., Nonlinear programming without computation, IEEE Transactions on Circuits and Systems, CAS (1984).
  • G. Wilson, Quadratic programming analogs, IEEE Transactions on Circuits and Systems, CAS (1986).
  • D.W. Tank et al., Simple neural optimization networks: an A/D converter, signal decision network, and linear programming circuit, IEEE Transactions on Circuits and Systems, CAS (1986).
  • D.G. Luenberger, Introduction to Linear and Nonlinear Programming (1989).
  • M.S. Bazaraa et al., Nonlinear Programming: Theory and Algorithms (1990).
  • E.K.P. Chong et al., An analysis of a class of neural networks for solving linear programming problems, IEEE Transactions on Automatic Control (1999).
  • A. Malek, H.G. Oskoei, Numerical solutions for constrained quadratic problems using high-performance neural networks, ...
  • A. Malek et al., Primal–dual solution for the linear programming problems using neural networks, Applied Mathematics and Computation (2004).
  • Y. Leung et al., A high-performance feedback neural network for solving convex nonlinear programming problems, IEEE Transactions on Neural Networks (2003).
There are more references available in the full text version of this article.
