Stochastic neural networks

Abstract

The first purpose of this paper is to present a class of algorithms for finding the global minimum of a continuous-variable function defined on a hypercube. These algorithms, based on both diffusion processes and simulated annealing, are implementable as analog integrated circuits. Such circuits can be viewed as generalizations of neural networks of the Hopfield type, and are called “diffusion machines.”
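
The diffusion mechanism behind these algorithms is, in its standard form, the annealed Langevin equation dX_t = -∇U(X_t) dt + √(2 T(t)) dW_t, where U is the objective and the temperature T(t) decreases slowly to zero so that the process settles near the global minimum. The following is a minimal simulation sketch in Python, assuming an Euler-Maruyama discretization, the classical logarithmic cooling schedule T(t) = c / log(2 + t), and reflection at the hypercube walls; the toy objective and all constants are illustrative and not taken from the paper.

```python
import numpy as np

def anneal_diffusion(grad_u, dim, n_steps=200_000, dt=1e-3, c=1.0, seed=0):
    """Euler-Maruyama simulation of the annealed Langevin diffusion
    dX = -grad U(X) dt + sqrt(2 T(t)) dW, reflected at the walls of the
    unit hypercube [0, 1]^dim.  T(t) = c / log(2 + t) is the classical
    logarithmic cooling schedule used in diffusion-based annealing."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=dim)      # arbitrary start inside the cube
    for k in range(n_steps):
        temp = c / np.log(2.0 + k * dt)      # slowly decreasing temperature
        noise = rng.standard_normal(dim)
        x = x - grad_u(x) * dt + np.sqrt(2.0 * temp * dt) * noise
        x = np.where(x < 0.0, -x, x)         # reflect below 0
        x = np.where(x > 1.0, 2.0 - x, x)    # reflect above 1
    return x

# Toy multimodal objective U(x) = sum((x - 0.25)^2 - 2 cos(3 pi x)), illustrative only.
def grad_u(x):
    return 2.0 * (x - 0.25) + 6.0 * np.pi * np.sin(3.0 * np.pi * x)

print(anneal_diffusion(grad_u, dim=2))
```

An analog circuit realizes the same dynamics in continuous time, with the noise supplied by a physical source rather than a pseudorandom generator; this is what makes the scheme attractive as a generalization of Hopfield-type networks.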

Our second objective is to show that “learning” in these networks can be achieved by a set of three interconnected diffusion machines: one that learns, one that models the desired behavior, and one that computes the weight changes.
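
The abstract does not spell out the update rule, but the three-machine arrangement reads naturally as a continuous analog of Boltzmann-machine learning: one machine runs with its units clamped to the desired behavior, the learning machine runs free, and the third machine converts the difference between the statistics of the two phases into weight changes. Below is a hedged discrete-state sketch under that reading; the ±1 Gibbs sampler, the update rule Δw ∝ ⟨ss'⟩_clamped − ⟨ss'⟩_free, and every name and parameter are stand-ins for the paper's continuous diffusion dynamics, not its actual construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_sample(w, b, s, clamp=None, sweeps=20, temp=1.0):
    """Sample +/-1 units with energy E(s) = -0.5 s'Ws - b's by Gibbs sweeps.
    `clamp` maps a unit index to a fixed value (the "desired behavior" machine)."""
    s = s.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            if clamp is not None and i in clamp:
                s[i] = clamp[i]
                continue
            field = w[i] @ s + b[i]            # local field on unit i
            p_on = 1.0 / (1.0 + np.exp(-2.0 * field / temp))
            s[i] = 1.0 if rng.random() < p_on else -1.0
    return s

def learn_step(w, b, clamp, lr=0.05, n_chains=50):
    """One weight update: the "third machine" accumulates the difference
    between clamped-phase and free-phase pairwise correlations."""
    n = len(b)
    corr_clamped = np.zeros((n, n))
    corr_free = np.zeros((n, n))
    for _ in range(n_chains):
        s0 = rng.choice([-1.0, 1.0], size=n)
        sc = gibbs_sample(w, b, s0, clamp=clamp)   # teacher-driven phase
        sf = gibbs_sample(w, b, s0)                # free-running phase
        corr_clamped += np.outer(sc, sc)
        corr_free += np.outer(sf, sf)
    dw = lr * (corr_clamped - corr_free) / n_chains
    np.fill_diagonal(dw, 0.0)                      # keep zero self-connections
    return w + dw

# Usage: teach units 0 and 1 to co-fire (hypothetical task).
n = 4
w, b = np.zeros((n, n)), np.zeros(n)
for _ in range(20):
    w = learn_step(w, b, clamp={0: 1.0, 1: 1.0})
```

In the paper's setting each of the three sampling processes would be a diffusion machine running in analog hardware, so the correlations are accumulated continuously rather than over discrete chains.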

Additional information

Communicated by Alberto Sangiovanni-Vincentelli.

This research was supported in part by U.S. Army Research Office Grant DAAL03-89-K-0128.

About this article

Cite this article

Wong, E. Stochastic neural networks. Algorithmica 6, 466–478 (1991). https://doi.org/10.1007/BF01759054
