
Classification and feature selection by a self-organizing neural network

  • Plasticity Phenomena (Maturing, Learning & Memory)
  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1606)

Abstract

This article describes recent improvements to an original neural-network building method that was previously restricted to the particular case of two-input neurones. After a brief review of the main principles used to build such a network, the authors introduce the ability of a neurone to receive more than two inputs. Two problems then arise: how to choose the number of inputs of a neurone, and what becomes of its decision rule? Addressing these problems leads to an original feature selection method based on genetic algorithms and to the adaptation of a linear discrimination algorithm to non-separable problems. Experimental results on a handwritten digit recognition problem confirm the efficiency of the method.
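
The abstract only summarizes the approach, so the following sketch is purely illustrative: it shows, in Python, how a genetic algorithm can select the inputs of a neurone while each candidate feature subset is scored by a linear discriminant that tolerates non-separable data. The pocket-perceptron scoring, the function names (pocket_accuracy, ga_select_features) and all parameter values are assumptions made for this sketch, not the authors' implementation or the exact decision rule described in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def pocket_accuracy(X, y, epochs=20):
        # Perceptron with a "pocket": keep the best accuracy seen so far, so the
        # score stays meaningful even when the classes are not linearly separable.
        Xb = np.hstack([X, np.ones((len(X), 1))])      # append a bias column
        w = np.zeros(Xb.shape[1])
        best_acc = 0.0
        for _ in range(epochs):
            for xi, yi in zip(Xb, y):
                if yi * (xi @ w) <= 0:                 # misclassified sample -> update
                    w = w + yi * xi
            acc = np.mean(np.sign(Xb @ w) == y)
            best_acc = max(best_acc, acc)
        return best_acc

    def ga_select_features(X, y, pop_size=20, generations=30, p_mut=0.05):
        # Each individual is a binary mask over the features; its fitness is the
        # pocket accuracy of a linear discriminant trained on the masked inputs.
        n = X.shape[1]
        pop = rng.random((pop_size, n)) < 0.5

        def fitness(mask):
            return pocket_accuracy(X[:, mask], y) if mask.any() else 0.0

        for _ in range(generations):
            scores = np.array([fitness(m) for m in pop])
            # truncation selection: keep the best half as parents
            parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n)                   # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                child ^= rng.random(n) < p_mut             # bit-flip mutation
                children.append(child)
            pop = np.vstack([parents, np.array(children)])
        scores = np.array([fitness(m) for m in pop])
        return pop[np.argmax(scores)]

    # Toy usage: two informative features hidden among 16 noisy ones.
    X = rng.normal(size=(200, 16))
    y = np.where(X[:, 3] + X[:, 7] > 0, 1, -1)
    print("selected features:", np.flatnonzero(ga_select_features(X, y)))

In the paper this kind of search is what decides the number of inputs a growing neurone receives; in the sketch the size of the retained subset simply emerges from the fitness landscape rather than being fixed in advance.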

Author information

A. Ribert, E. Stocker, A. Ennaji, Y. Lecourtier
Editor information

José Mira, Juan V. Sánchez-Andrés

Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ribert, A., Stocker, E., Ennaji, A., Lecourtier, Y. (1999). Classification and feature selection by a self-organizing neural network. In: Mira, J., Sánchez-Andrés, J.V. (eds) Foundations and Tools for Neural Modeling. IWANN 1999. Lecture Notes in Computer Science, vol 1606. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0098223

  • DOI: https://doi.org/10.1007/BFb0098223

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66069-9

  • Online ISBN: 978-3-540-48771-5

  • eBook Packages: Springer Book Archive
