Short Communication
The impact of energy function structure on solving generalized assignment problem using Hopfield neural network

https://doi.org/10.1016/j.ejor.2004.06.015

Abstract

In the last 20 years, neural network researchers have exploited different penalty-based energy function structures for solving combinatorial optimization problems (COPs) and have established solutions that are stable and convergent. These solutions, however, have in general suffered from a lack of feasibility and integrality. Operational researchers, on the other hand, have developed various methods for converting a constrained optimization problem into an unconstrained one. In this paper we investigate these methods for solving generalized assignment problems (GAPs). Our results concretely establish that the augmented Lagrangean method can produce superior results with respect to feasibility and integrality, which are currently the main concerns in solving COPs with neural networks.

Introduction

It has been about two decades since neural networks were first applied to solve combinatorial optimization problems (COPs). It was expected that the inherent parallel processing and analog nature of the interconnected neurons or neural net could create a rapid and powerful solution technique.

In 1985, Hopfield and Tank [7] proposed a model to solve the TSP, a famous COP. The network, as a dynamical system, is represented by an energy function made equivalent to the objective function of the combinatorial problem to be minimized, while the constraints of the problem are included in the energy function as penalty terms. They simulated a network of 10 cities (100 neurons), chosen at random in the interior of a two-dimensional unit square. Of 20 random starts, 16 converged to valid tours, and about 50% of the trials produced one of the two known shortest tours.

In 1988, Wilson and Pawley [13] raised serious doubts about the reliability and validity of the H–T approach to solving TSPs. They tried to reproduce the H–T solutions for the 10-city problem. Of 100 random starts, only 15 converged to valid tours, 45 froze into local minima corresponding to invalid tours, and the remaining 40 did not converge within 1000 iterations. Moreover, the 15 valid tours were only slightly better than randomly chosen tours. These results were dramatically different from those of Hopfield and Tank on the identical problem. Wilson and Pawley argued that it is the structure of the energy function, and not the initialization scheme, that is the root of the problem.

In 1992, Abe et al. [1] extended the Hopfield neural network model to solve inequality-constrained combinatorial optimization problems in which linear combinations of variables are lower- or upper-bounded. They proposed lower bounds on the energy function penalty terms that guarantee feasibility. They tested their method on a small instance of the knapsack problem (6 items) and a small instance of the transportation problem (6 nodes). However, the solutions thus obtained were feasible only if the integrality constraints were relaxed, contrary to what is expected in COPs.

In 1996, Watta and Hassoun [12] developed a neural net for mixed integer programming. In this network, continuous neurons represent both continuous and integer variables, and the integrality constraints are relaxed. When the network converged, the outputs of these continuous neurons were thresholded to yield integer solutions. This is similar to solving a mixed integer program by LP relaxation and then rounding the results up or down to obtain integer values. From an OR point of view, it is clear that the solutions so obtained are infeasible in most cases.
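The weakness of such a relax-then-threshold scheme can be seen on a toy capacity constraint. The numbers below are hypothetical and only illustrate why rounding a feasible fractional point often breaks feasibility:

```python
# Toy illustration (hypothetical numbers): a relaxed solution can satisfy a
# capacity constraint while its thresholded 0-1 version violates it.
a_row = [3.0, 3.0]      # resources two jobs would consume on one agent
b_i = 4.0               # that agent's capacity

x_relaxed = [0.6, 0.6]  # fractional values a converged continuous net might output
x_rounded = [1.0 if v >= 0.5 else 0.0 for v in x_relaxed]

load_relaxed = sum(a * x for a, x in zip(a_row, x_relaxed))  # 3.6 <= 4: feasible
load_rounded = sum(a * x for a, x in zip(a_row, x_rounded))  # 6.0 >  4: infeasible
```

Both jobs individually fit, and the fractional point respects the capacity, yet thresholding assigns both jobs fully and overloads the agent.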

In 1999, Walsh et al. [11] showed that between 20% and 45% of experiments on a 10-generator scheduling problem solved with the model of Watta and Hassoun [12] led to infeasible solutions. They attributed the difficulty to the use of continuous neurons to represent integer variables and modified the model by incorporating discrete neurons for the integer variables. They tested the modified model on a 17-generator scheduling problem, in which convergence to infeasible solutions was still observed.

In 1999, Gall and Zissimopoulos [5] improved on the work of Abe et al. [1] and proposed a modification to its penalty function structure, called the competitive activation mechanism (CAM), by which more integral values could be produced. They tested their method on a group of medium-size set covering instances with 60 to 150 sets and 30 to 50 elements. Still, they reported that on average 20% of the variables reached stable states at intermediate values between 0 and 1, violating the integrality constraints.

From the analysis above we conclude that, despite two decades of research [3], [4], [7], [9], [10], neural combinatorial optimization methods have suffered from a lack of feasibility and integrality, even though stability and convergence have been attained. To improve this situation, we have investigated four different methods proposed in the OR literature for building the energy function structure to solve generalized assignment problems (GAPs).

While a number of reliable and efficient exact and approximate algorithms exist for solving GAPs [14], neural network modeling remains important for its promise of rapid execution through hardware implementation. Many real-life applications, including crew scheduling and weapon-target assignment, demand this feature [10].

In Section 2 the GAP is defined and a numerical example is provided. In Section 3 the problem is solved using four different methods and the results are analyzed. It is shown that the augmented Lagrangean method improves the solutions with respect to feasibility and integrality, while maintaining stability and convergence. Conclusions are drawn in Section 4.


Solving GAPs by using neural networks

Operational researchers have developed and exploited different methods for converting a constrained optimization problem into an unconstrained one [2]. In the neural network literature, however, it is mainly the exterior penalty function method that has been used [10]. The generalized assignment problem (GAP), an NP-hard problem, is defined as:

$$\min \sum_{i=1}^{m}\sum_{j=1}^{n} c_{ij}x_{ij}$$
$$\text{s.t.}\quad \sum_{j=1}^{n} a_{ij}x_{ij} \le b_i, \quad i=1,\ldots,m,$$
$$\sum_{i=1}^{m} x_{ij} = 1, \quad j=1,\ldots,n,$$
$$x_{ij} \in \{0,1\}, \quad i=1,\ldots,m,\ j=1,\ldots,n,$$

where $c_{ij}$ defines the cost associated with assigning job $j$ to agent $i$, $a_{ij}$ the amount of resource job $j$ consumes on agent $i$, and $b_i$ the resource capacity of agent $i$.
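For concreteness, the formulation above can be checked on a toy instance by brute-force enumeration over all job-to-agent maps. The data below are hypothetical placeholders, not the paper's numerical example:

```python
import itertools

# Tiny hypothetical GAP instance: m = 2 agents, n = 3 jobs.
c = [[4, 1, 3],   # c[i][j]: cost of assigning job j to agent i
     [2, 5, 2]]
a = [[3, 2, 2],   # a[i][j]: resource job j consumes on agent i
     [2, 3, 2]]
b = [4, 5]        # b[i]: resource capacity of agent i

def solve_gap_brute_force(c, a, b):
    """Enumerate every assignment of each job to exactly one agent and
    return the cheapest one that respects all agent capacities."""
    m, n = len(c), len(c[0])
    best_cost, best_assign = float("inf"), None
    for assign in itertools.product(range(m), repeat=n):  # assign[j] = agent of job j
        load = [0] * m
        for j, i in enumerate(assign):
            load[i] += a[i][j]
        if all(load[i] <= b[i] for i in range(m)):        # capacity constraints
            cost = sum(c[i][j] for j, i in enumerate(assign))
            if cost < best_cost:
                best_cost, best_assign = cost, assign
    return best_cost, best_assign

cost, assign = solve_gap_brute_force(c, a, b)
# -> cost 5, with jobs 0 and 2 on agent 1 and job 1 on agent 0
```

Enumeration is exponential ($m^n$ assignments) and only serves to validate the constraint structure on instances this small.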

Results for exterior penalty function

The exterior penalty conversion of the GAP (for the example with $m=3$, $n=8$) is

$$\theta(x,\mu)=\sum_{i=1}^{3}\sum_{j=1}^{8}c_{ij}x_{ij}+\frac{\mu}{2}\sum_{j=1}^{8}\Big(\sum_{i=1}^{3}x_{ij}-1\Big)^{2}+\frac{\mu}{2}\sum_{i=1}^{3}\Big[\min\Big\{0,\;b_i-\sum_{j=1}^{8}a_{ij}x_{ij}\Big\}\Big]^{2}+\frac{\mu}{2}\sum_{i=1}^{3}\sum_{j=1}^{8}x_{ij}^{2}(1-x_{ij})^{2}.$$
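The structure of this energy function can be sketched in code. The instance below is a randomly generated placeholder of the same size, not the paper's numerical example; the sketch checks that on a feasible 0–1 assignment every penalty term vanishes, so θ reduces to the raw objective for any μ:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 8                                    # sizes of the paper's example
c = rng.integers(1, 10, (m, n)).astype(float)  # hypothetical cost matrix
a = rng.integers(1, 5, (m, n)).astype(float)   # hypothetical resource matrix
b = np.full(m, 15.0)                           # hypothetical agent capacities

def theta(x, mu):
    """Exterior penalty energy: objective plus quadratic penalties for the
    one-agent-per-job equalities, the capacity inequalities, and integrality."""
    obj = np.sum(c * x)
    eq = np.sum((x.sum(axis=0) - 1.0) ** 2)                       # sum_i x_ij = 1
    ineq = np.sum(np.minimum(0.0, b - (a * x).sum(axis=1)) ** 2)  # capacities
    integ = np.sum(x ** 2 * (1.0 - x) ** 2)                       # x_ij in {0,1}
    return obj + 0.5 * mu * (eq + ineq + integ)

# A feasible 0-1 assignment: spread the 8 jobs round-robin over the 3 agents,
# so each agent carries at most 3 jobs (load <= 12 <= 15 here).
x = np.zeros((m, n))
for j in range(n):
    x[j % m, j] = 1.0

# With all constraints satisfied, theta equals the raw objective.
penalty_free = abs(theta(x, mu=10.0) - np.sum(c * x)) < 1e-12
```

Infeasible or fractional points, by contrast, pay a penalty growing with μ, which is what drives the network toward feasible integral states.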

This is an unconstrained optimization problem in which $\mu$ is the penalty parameter. Note that we have adopted the quadratic concave equality constraints $x_{ij}(1-x_{ij})=0$ to replace the 0–1 integrality constraints. To build the corresponding dynamic system we have the differential equations

$$\frac{dx_{pq}}{dt}=-\rho\Big[c_{pq}+\mu\Big(\sum_{i=1}^{3}x_{iq}-1\Big)-\mu a_{pq}\min\Big\{0,\;b_p-\sum_{j=1}^{8}a_{pj}x_{pj}\Big\}+\mu x_{pq}(1-x_{pq})(1-2x_{pq})\Big],\quad p=1,2,3,\ q=1,\ldots,8,$$

where $\rho$ is the learning rate.
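As a rough numerical check of these dynamics, one can compare the bracketed term against a finite difference of θ and integrate the system with an explicit Euler step. The data, μ, and ρ below are arbitrary placeholders, not the values used in the paper; loose capacities keep the min{·} term smooth at the test point:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 8
c = rng.uniform(1.0, 10.0, (m, n))   # hypothetical problem data
a = rng.uniform(1.0, 5.0, (m, n))
b = np.full(m, 40.0)                 # loose capacities: slack stays positive here
mu, rho = 10.0, 0.01                 # assumed penalty parameter and learning rate

def theta(x):
    return (np.sum(c * x)
            + 0.5 * mu * np.sum((x.sum(axis=0) - 1.0) ** 2)
            + 0.5 * mu * np.sum(np.minimum(0.0, b - (a * x).sum(axis=1)) ** 2)
            + 0.5 * mu * np.sum(x ** 2 * (1.0 - x) ** 2))

def grad_theta(x):
    """The bracketed term of the dynamics, i.e. the gradient of theta w.r.t. x."""
    eq = mu * (x.sum(axis=0, keepdims=True) - 1.0)     # equality penalty
    slack = np.minimum(0.0, b - (a * x).sum(axis=1))   # capacity violation, per agent
    ineq = -mu * a * slack[:, None]                    # inequality penalty
    integ = mu * x * (1.0 - x) * (1.0 - 2.0 * x)       # integrality penalty
    return c + eq + ineq + integ

# Finite-difference check of one gradient component at a random interior point:
x0 = rng.uniform(0.2, 0.8, (m, n))
eps = 1e-6
xp, xm = x0.copy(), x0.copy()
xp[0, 0] += eps
xm[0, 0] -= eps
fd = (theta(xp) - theta(xm)) / (2.0 * eps)
grad_ok = abs(fd - grad_theta(x0)[0, 0]) < 1e-4

# Euler integration of dx/dt = -rho * grad_theta(x) (time step folded into rho);
# with this small step the energy decreases along the trajectory.
x = x0.copy()
e0 = theta(x)
for _ in range(2000):
    x -= rho * grad_theta(x)
e1 = theta(x)
```

This only verifies that the stated dynamics descend the penalized energy; it says nothing by itself about the feasibility or integrality of the limit point, which is exactly the issue the comparison in this paper addresses.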

Conclusions

In this paper we argued that stability, feasibility, integrality and optimality are all important criteria in the design of neural combinatorial optimization methods. The former two criteria have been investigated extensively, while the latter two remain open for further investigation.

We have considered four different methods from the OR literature to structure the energy function of the neural network and have illustrated that the augmented Lagrangean method can produce superior results with respect to feasibility and integrality.

Acknowledgement

The authors thank the two referees for their helpful comments.

References (14)

  • S. Abe et al., Solving inequality constrained combinatorial optimization problems by the Hopfield neural networks, Neural Networks (1992)
  • M.S. Bazaraa et al., Nonlinear Programming: Theory and Algorithms (1993)
  • Y.H. Chen et al., Neurocomputing with time delay analysis for solving convex quadratic programming problems, IEEE Transactions on Neural Networks (2000)
  • G. Galan-Marin et al., Design and analysis of maximum Hopfield networks, IEEE Transactions on Neural Networks (2001)
  • A. Gall et al., Extended Hopfield models for combinatorial optimization, IEEE Transactions on Neural Networks (1999)
  • D. Gong et al., Neural network approach for general assignment problem, Proc. Int. Conf. Neural Networks (1995)
  • J.J. Hopfield et al., Neural computation of decisions in optimization problems, Biological Cybernetics (1985)
