Individual evolutionary algorithm and its application to learning of nearest neighbor based MLP

  • Conference paper
From Natural to Artificial Neural Computation (IWANN 1995)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 930)

Abstract

A society S(I, T) is defined as a system consisting of an individual set I and a task set T. This paper studies the problem of finding an efficient S such that all tasks in T can be fulfilled with the smallest possible I. The individual evolutionary algorithm (IEA) is proposed to solve this problem. With the IEA, each individual finds and adapts itself to a class of tasks through evolution, and an efficient S is obtained automatically. The IEA consists of four operations: competition, gain, loss, and retraining. Competition tests the performance of the current I and the fitness of each individual; gain improves the performance of I by adding new individuals; loss makes I more compact by removing individuals with very low fitness; and retraining adjusts the remaining individuals to improve them further. An evolution cycle is competition → (gain ∨ loss) → retraining, and evolution proceeds cycle after cycle until a stopping criterion is satisfied. The performance of the IEA is verified by applying it to the learning of nearest-neighbor-based multilayer perceptrons. A sketch of one such cycle is given below.
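To make the cycle concrete, here is a minimal sketch of one IEA evolution cycle, assuming individuals are prototype vectors of a nearest-neighbor classifier and tasks are labeled training samples. The function name iea_cycle, the learning rate lr, the loss_threshold, and the LVQ-style retraining update are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def iea_cycle(prototypes, proto_labels, X, y, lr=0.1, loss_threshold=1):
    """One evolution cycle: competition -> (gain or loss) -> retraining (sketch)."""
    prototypes = np.asarray(prototypes, dtype=float).copy()
    proto_labels = np.asarray(proto_labels)
    X, y = np.asarray(X, dtype=float), np.asarray(y)

    # Competition: every task (sample) is claimed by its nearest individual
    # (prototype); an individual's fitness is how many samples it classifies correctly.
    dists = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    winners = dists.argmin(axis=1)
    correct = proto_labels[winners] == y
    fitness = np.bincount(winners[correct], minlength=len(prototypes))

    # Gain: if some task is not fulfilled (misclassified), add a new individual for it.
    wrong = np.where(~correct)[0]
    if wrong.size > 0:
        i = wrong[0]
        prototypes = np.vstack([prototypes, X[i]])
        proto_labels = np.append(proto_labels, y[i])
        fitness = np.append(fitness, 1)

    # Loss: remove individuals with very low fitness (skipped if it would empty I).
    keep = fitness >= loss_threshold
    if keep.any():
        prototypes, proto_labels = prototypes[keep], proto_labels[keep]

    # Retraining: adjust each winning individual toward (or away from) its samples,
    # using an LVQ-style update chosen purely for illustration.
    dists = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    winners = dists.argmin(axis=1)
    for i, w in enumerate(winners):
        sign = 1.0 if proto_labels[w] == y[i] else -1.0
        prototypes[w] += sign * lr * (X[i] - prototypes[w])

    return prototypes, proto_labels
```

In use, the caller would repeat iea_cycle until classification accuracy stops improving or the individual set stops changing, mirroring the cycle-after-cycle evolution and stopping criterion described in the abstract.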

Editor information

José Mira, Francisco Sandoval

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhao, Q., Higuchi, T. (1995). Individual evolutionary algorithm and its application to learning of nearest neighbor based MLP. In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_201

  • DOI: https://doi.org/10.1007/3-540-59497-3_201

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59497-0

  • Online ISBN: 978-3-540-49288-7
