
Equivalence between some dynamical systems for optimization

Neural Processing Letters

Abstract

By deriving solution methods for an elementary optimization problem, it is shown that stochastic relaxation in image analysis, Potts neural networks for combinatorial optimization, and interior point methods for nonlinear programming share a common formulation of their dynamics. This unification opens the possibility of solving all of these problems in real time with common analog electronic circuits.
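The paper's full derivation is not reproduced on this page, but the kind of shared dynamics the abstract alludes to can be illustrated with a minimal sketch. The assumption here (not taken from the paper) is that the common form is a multiplicative, softmax-like update on the probability simplex, of the sort that appears both in Potts mean-field networks and in entropy-based interior point methods; the quadratic cost `Q`, the step size `eta`, and the three-variable problem are all hypothetical choices for illustration:

```python
import numpy as np

# Illustrative only: minimize f(x) = 0.5 * x^T Q x over the probability
# simplex using a multiplicative (replicator-like / entropic mirror
# descent) update, which stays in the interior of the simplex at every
# step -- the property exploited by interior point methods.
Q = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])

x = np.ones(3) / 3.0            # interior starting point
eta = 0.5                        # step size (plays the role of inverse temperature)
for _ in range(200):
    g = Q @ x                    # gradient of f at x
    x = x * np.exp(-eta * g)     # multiplicative update: x stays positive
    x /= x.sum()                 # renormalize back onto the simplex

print(np.round(x, 3))            # prints approximately [0.333 0.    0.667]
```

Because the update multiplies each coordinate by a positive factor and renormalizes, the iterate never leaves the open simplex, and coordinates with above-average gradient decay exponentially; here the trajectory settles on the KKT point (1/3, 0, 2/3) of the constrained quadratic program.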



Cite this article

Urahama, K. Equivalence between some dynamical systems for optimization. Neural Process Lett 1, 14–17 (1994). https://doi.org/10.1007/BF02310937
