
An adiabatic neural network for RBF approximation


Abstract

Numerous studies have addressed nonlinear functional approximation by multilayer perceptrons (MLPs) and RBF networks as a special case of the more general mapping problem. The performance of both these supervised network models depends intimately on the efficiency of their learning process. This paper presents an unsupervised recurrent neural network, based on the recurrent Mean Field Theory (MFT) network model, that finds a least-squares approximation to an arbitrary L2 function, given a set of Gaussian radially symmetric basis functions (RBFs). Essential to this approach is the reformulation of RBF approximation as a constrained optimisation problem. A new concept of adiabatic network organisation is introduced. Together with an adaptive mechanism of temperature control, this allows the network to build a hierarchical multiresolution approximation while preserving the global optimisation characteristics. A revised problem mapping results in a position-invariant local interconnectivity pattern, which makes the network attractive for electronic implementation. The dynamics and performance of the network are illustrated by numerical simulation.
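The abstract frames RBF approximation as a least-squares problem over a fixed set of Gaussian basis functions, built up hierarchically from coarse to fine resolution. The sketch below illustrates only that underlying formulation, not the paper's recurrent MFT network or its adiabatic temperature control; the target function, centre grids, and widths are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch (not the paper's network): least-squares fitting of a 1-D
# target with Gaussian radial basis functions, refined over several
# resolution levels. All parameter choices here are illustrative.
import numpy as np

def gaussian_rbf(x, centres, width):
    # Design matrix: one Gaussian basis function per centre.
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))

# Sample an arbitrary L2 target on [0, 1].
x = np.linspace(0.0, 1.0, 200)
target = np.sin(2.0 * np.pi * x) + 0.3 * np.cos(6.0 * np.pi * x)

# Multiresolution idea in miniature: wide RBFs capture the coarse shape,
# then narrower ones are fitted to the remaining residual.
approximation = np.zeros_like(x)
for n_centres, width in [(4, 0.25), (8, 0.12), (16, 0.06)]:
    centres = np.linspace(0.0, 1.0, n_centres)
    Phi = gaussian_rbf(x, centres, width)
    residual = target - approximation
    # Ordinary least squares for the RBF weights at this resolution level.
    weights, *_ = np.linalg.lstsq(Phi, residual, rcond=None)
    approximation += Phi @ weights

print("final RMS error:", np.sqrt(np.mean((target - approximation) ** 2)))
```

In the paper this least-squares problem is instead solved by a recurrent network whose relaxation dynamics, under a slowly varied (adiabatic) temperature schedule, settle into the optimal weights at each resolution level.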




About this article

Cite this article

Truyen, B., Langloh, N. & Cornelis, J. An adiabatic neural network for RBF approximation. Neural Comput & Applic 2, 69–88 (1994). https://doi.org/10.1007/BF01414351

