Elsevier

Neurocomputing

Volume 8, Issue 3, August 1995, Pages 283-304

Dual-mode dynamics neural network for combinatorial optimization

https://doi.org/10.1016/0925-2312(94)00043-R

Abstract

This paper presents a new approach to solving combinatorial optimization problems based on a novel dynamic neural network featuring two modes of network dynamics: state dynamics and weight dynamics. The network is referred to here as the dual-mode dynamics neural network (D2NN).

Recently, neural network approaches have been studied as solutions to combinatorial optimization problems. There are two major difficulties in these approaches. First, the objective function for a given problem must have a form that can be mapped onto the network, and second, due to the local minima problem, the quality of the solution is quite sensitive to various factors, such as the initial state and the parameters in the objective function. The proposed scheme overcomes these difficulties (1) by maintaining the objective function separately from the network energy function, rather than mapping it onto the network, and (2) by introducing a weight dynamics that utilizes the objective function to overcome the local minima problem. The state dynamics drives state trajectories in a direction that minimizes the network energy specified by the current weights and states, whereas the weight dynamics generates weight trajectories in a direction that minimizes a preassigned external objective function at the current state. The D2NN is operated in such a way that the two modes of network dynamics alternately govern the network until an equilibrium is reached. Simulation results on the N-Queen problem and the knapsack problem indicate superior performance of the D2NN.
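The abstract only describes the scheme at a high level. As a toy illustration of the alternating two-mode idea (not the paper's actual equations), the sketch below pairs asynchronous Hopfield-style state updates, which decrease the network energy E(s) = -½ sᵀWs for fixed weights, with a hypothetical anti-Hebbian weight update driven by an external objective kept separate from the energy; the objective function, learning rate `eta`, and update rule are illustrative assumptions:

```python
import numpy as np

# Toy sketch of the dual-mode idea (illustrative, not the paper's equations):
# - state mode: asynchronous sign updates decrease the network energy
#   E(s) = -1/2 s^T W s for the current weights W
# - weight mode: a hypothetical anti-Hebbian update, scaled by the external
#   objective f(s), destabilizes a poor equilibrium so the state can escape
#   local minima of E

rng = np.random.default_rng(0)
n = 8

def objective(s):
    # external objective kept separate from the network energy:
    # here, a toy target of exactly n // 2 active (+1) units
    return (s.sum() - n // 2) ** 2

def state_mode(W, s, sweeps=5):
    # asynchronous updates; with symmetric W and zero diagonal,
    # each flip does not increase the energy
    for _ in range(sweeps):
        for i in rng.permutation(n):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def weight_mode(W, s, eta=0.1):
    # hypothetical anti-Hebbian penalty update at the current state
    W -= eta * objective(s) * np.outer(s, s)
    np.fill_diagonal(W, 0.0)
    return W

W = rng.standard_normal((n, n))
W = (W + W.T) / 2          # symmetric weights
np.fill_diagonal(W, 0.0)
s = rng.choice([-1, 1], size=n)

# alternate the two modes until the external objective is satisfied
for _ in range(50):
    s = state_mode(W, s)
    if objective(s) == 0:
        break
    W = weight_mode(W, s)

print(objective(s))
```

The key design point mirrored here is the separation of concerns: the energy function shapes only the state trajectories, while the external objective is consulted only by the weight dynamics, so the problem never has to be mapped onto the network energy directly.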
