
Neural net connection estimates applied for feature selection & improved linear classifier design

  • Neural Nets
  • Conference paper
Uncertainty and Intelligent Systems (IPMU 1988)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 313)


Abstract

Estimates of connection strengths are obtained and used to derive the weights for building a linear classifier of complex binary patterns. Bootstrap resampling methods permit training sets that are small, large, or highly disparate in size to be used with equal ease. Using ideas suggested by methods proposed for learning in neural net systems [Hopfield 1982] [Cruz-Young et al 1986], connection values can now be obtained without prohibitive or arbitrarily terminated computations. The weight values so derived are identified as equivalent to those at the middle layer of three-layer neural net models. The model serving as the springboard for this study was developed by Kanerva [1986, 1987] and functions as a distributed sparse memory (DSM). In the various implementations of connection-based memories, both linear and nonlinear relationships within patterns play a role in the classification process. The experimental data used in this research were derived from psychological profiles produced by coding the responses elicited by question-like items. Traditionally, such items are designed and chosen to be linear predictors of class membership or performance. The phase of the study reported here addresses the problems and successes of initial efforts toward a practical application of neural network computational concepts. The main focus is on the approaches that have proved particularly effective in obtaining adequate estimates of effective linear functions for the classification of binary patterns of 64 to 256 bits.
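
The abstract does not reproduce the estimation formulas themselves. The sketch below is one plausible reading, assuming a Hopfield-style Hebbian (outer-product) estimate of bit-to-class connection strengths, averaged over bootstrap resamples to form linear classifier weights; the function names and toy data are illustrative, not from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def hebbian_weights(patterns, labels):
        # Hopfield-style connection estimate: correlate each +/-1 bit
        # with the +/-1 class label. The result serves directly as the
        # weight vector of a linear classifier.
        return (labels[:, None] * patterns).mean(axis=0)

    def bootstrap_weights(patterns, labels, n_boot=200):
        # Average Hebbian estimates over bootstrap resamples so that
        # small or badly size-imbalanced training sets still yield
        # stable weight estimates (an assumption about the method).
        n = len(labels)
        w = np.zeros(patterns.shape[1])
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)  # sample with replacement
            w += hebbian_weights(patterns[idx], labels[idx])
        return w / n_boot

    # Toy demo on 64-bit patterns (the paper ranges from 64 to 256 bits).
    n_bits, n_train = 64, 30
    true_w = rng.choice([-1.0, 1.0], size=n_bits)
    X = rng.choice([-1.0, 1.0], size=(n_train, n_bits))
    y = np.where(X @ true_w >= 0, 1.0, -1.0)

    w = bootstrap_weights(X, y)
    accuracy = (np.where(X @ w >= 0, 1.0, -1.0) == y).mean()
    print("training accuracy:", accuracy)

Averaging over resamples is one way bootstrap methods can make very small or highly unbalanced training sets usable: no single draw dominates the connection estimates, and each estimate is a closed-form correlation rather than an iterative computation that must be arbitrarily terminated.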

References

  • Kanerva, P. [1987] Self-propagating search: A unified theory of memory (Report No. CSLI-84-7). Stanford, CA: Stanford University, Center for the Study of Language and Information.

  • Kanerva, P. [1986] Parallel structures in human and computer memory. AIP Conference Proceedings 151, Neural Networks for Computing, Snowbird, Utah, 1986, pp. 247–258.

  • Hopfield, J. J. [1982] Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, USA, 79, 2554–2558.

  • Cruz-Young, C. A., Hanson, W. A., Tam, J. Y. [1986] Flow-of-activation processing: Parallel Associative Networks (PAN). AIP Conference Proceedings 151, Neural Networks for Computing, Snowbird, Utah, 1986, pp. 115–120.

Author information

A. J. Surkan

Editor information

B. Bouchon, L. Saitta, R. R. Yager

Copyright information

© 1988 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Surkan, A.J. (1988). Neural net connection estimates applied for feature selection & improved linear classifier design. In: Bouchon, B., Saitta, L., Yager, R.R. (eds) Uncertainty and Intelligent Systems. IPMU 1988. Lecture Notes in Computer Science, vol 313. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-19402-9_84

  • DOI: https://doi.org/10.1007/3-540-19402-9_84

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-19402-6

  • Online ISBN: 978-3-540-39255-2

  • eBook Packages: Springer Book Archive
