Choosing and using a neural net

Chapter in: Artificial Neural Networks

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 931)

Abstract

To recapitulate, there are a great many ways of solving particular problems; a neural network is only one of them. We should stress that the approach is essentially one at the algorithmic level, with possible implementational consequences in the future. For someone with a specific problem, it is first necessary to analyse that problem in terms of the features discussed here, such as whether learning is necessary, whether generalisation is required, and the type of input. Such an analysis provides a pattern that can be checked for a match against one of the architectures in Table 1. If a match exists, that is a strong suggestion that the matching architecture may prove useful and usable. If no such match exists, one can then ask whether neural networks are the appropriate tool at all or, alternatively, search either for the nearest network architecture or for one not covered by our scheme.
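Read as a procedure, the abstract describes a simple matching step: characterise the problem by its features, then look for an architecture in Table 1 whose feature profile covers those requirements. Since Table 1 is not reproduced here, the sketch below uses a small hypothetical stand-in table; the architecture names and feature labels are illustrative placeholders, not the chapter's own entries. It only shows the shape of the matching step, including the fall-back question of whether a neural network is the appropriate tool at all when no match is found.

```python
# Illustrative sketch only: the feature table below is a hypothetical stand-in
# for the chapter's Table 1, which is not reproduced here.
from typing import Dict, List, Set

# Hypothetical stand-in for Table 1: each architecture mapped to the
# problem features it supports.
ARCHITECTURES: Dict[str, Set[str]] = {
    "back-propagation network": {"supervised learning", "generalisation", "analog input"},
    "Hopfield network":         {"associative recall", "binary input"},
    "Kohonen map":              {"unsupervised learning", "analog input", "topological ordering"},
    "ART network":              {"unsupervised learning", "stable categories", "analog input"},
}

def matching_architectures(problem_features: Set[str]) -> List[str]:
    """Return the architectures whose feature set covers the problem's requirements."""
    return [name for name, feats in ARCHITECTURES.items()
            if problem_features <= feats]

if __name__ == "__main__":
    # A problem that needs supervised learning with generalisation over analog inputs.
    problem = {"supervised learning", "generalisation", "analog input"}
    matches = matching_architectures(problem)
    if matches:
        print("Candidate architectures:", ", ".join(matches))
    else:
        print("No match in the table: reconsider whether a neural network is the right tool.")
```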



Editor information

P. J. Braspenning, F. Thuijsman, A. J. M. M. Weijters


Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Hudson, P.T.W., Postma, E.O. (1995). Choosing and using a neural net. In: Braspenning, P.J., Thuijsman, F., Weijters, A.J.M.M. (eds) Artificial Neural Networks. Lecture Notes in Computer Science, vol 931. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0027034

  • DOI: https://doi.org/10.1007/BFb0027034

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59488-8

  • Online ISBN: 978-3-540-49283-2

  • eBook Packages: Springer Book Archive
