Realization of surjective correspondence in artificial neural network trained by Fahlman and Lebiere's learning algorithm

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 686)

Abstract

In realizing surjective correspondence through incremental learning in a feedforward neural network, it is desirable that the network designer be able to reuse the hidden outputs of an already-trained network realizing injective correspondence, so that the system can adapt to a changing environment that may demand learning of newly added patterns belonging to an existing category. To design a system that performs such an extended task without destroying the hidden outputs gained by the previously trained network, new hidden units are incorporated to acquire the additional information required by the newly defined task. Fahlman and Lebiere's (FL) learning algorithm is particularly suitable for this purpose, since it gradually adds just the required number of new hidden units. Previous studies have shown that the FL network generalizes far better than the Backpropagation (BP) network [10], [11], and an extended FL network realizing incremental learning with an increased number of categories has also been reported to generalize better than the BP network [12], [13]. In this paper, we describe the realization of surjective correspondence as incremental learning by the FL algorithm. Our investigation shows that an FL network trained on surjective correspondence has better generalization ability than a BP network, owing to the well-saturated hidden outputs attained in the FL network.
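
For readers unfamiliar with the FL (Cascade-Correlation) procedure the abstract builds on, the sketch below illustrates its core mechanism: hidden units are recruited one at a time, each trained to correlate with the network's residual error and then frozen, so hidden outputs learned earlier survive when the task is extended. This is an illustrative reconstruction in Python, not the authors' implementation; the XOR task, the hyperparameters, and all function names are assumptions chosen for brevity.

    # Minimal sketch of Fahlman-Lebiere (Cascade-Correlation) style unit
    # recruitment. Illustrative only: the task, learning rates, epoch counts
    # and stopping threshold are assumptions, not the paper's settings.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_output_weights(H, T, epochs=3000, lr=2.0):
        # Gradient descent on mean squared error, adapting output weights only.
        W = rng.normal(scale=0.1, size=(H.shape[1], T.shape[1]))
        for _ in range(epochs):
            Y = sigmoid(H @ W)
            W -= lr * H.T @ ((Y - T) * Y * (1.0 - Y)) / len(H)
        return W

    def recruit_hidden_unit(H, E, epochs=3000, lr=2.0):
        # Train a candidate unit to maximize the magnitude of the covariance
        # between its output and the residual error E; its incoming weights
        # are frozen afterwards, preserving earlier hidden representations.
        w = rng.normal(scale=0.1, size=H.shape[1])
        e = E - E.mean(axis=0)                       # centered residual error
        for _ in range(epochs):
            v = sigmoid(H @ w)
            s = np.sign((v - v.mean()) @ e)          # correlation sign per output
            dv = (e * s).sum(axis=1) * v * (1.0 - v)
            w += lr * H.T @ dv / len(H)              # gradient ascent step
        return w

    # XOR as a stand-in task: not linearly separable, so at least one
    # hidden unit must be recruited before the outputs can be correct.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    T = np.array([[0], [1], [1], [0]], float)

    H = np.hstack([X, np.ones((len(X), 1))])         # inputs plus bias
    W = train_output_weights(H, T)
    for _ in range(5):                               # recruit at most 5 units
        E = sigmoid(H @ W) - T
        if np.max(np.abs(E)) < 0.1:                  # task solved: stop
            break
        w = recruit_hidden_unit(H, E)                # frozen after this call
        H = np.hstack([H, sigmoid(H @ w)[:, None]])  # cascade connectivity
        W = train_output_weights(H, T)               # retrain outputs only

    print("hidden units recruited:", H.shape[1] - X.shape[1] - 1)
    print("outputs:", np.round(sigmoid(H @ W).ravel(), 2))

The property the paper exploits is visible in recruit_hidden_unit: once a unit's incoming weights are frozen, later learning of newly added patterns can only add further units, never corrupt the hidden outputs that realized the original injective correspondence.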


References

  1. D. E. Rumelhart, G. E. Hinton and R. J. Williams, “Learning Internal Representations by Error Propagation”, in Parallel Distributed Processing, D. E. Rumelhart and J. L. McClelland, Eds. Cambridge, MA: MIT Press, vol. 1, pp. 318–362, 1986.

  2. S. I. Gallant, “Perceptron-Based Learning Algorithms”, IEEE Trans. Neural Networks, vol. 1, no. 2, pp. 179–191, 1990.

  3. M. Mezard and J.-P. Nadal, “Learning in Feedforward Neural Networks: the Tiling Algorithm”, J. Phys., A: Math. Gen. 22, pp. 2191–2203, 1989.

  4. M. Frean, “The Upstart Algorithm: A Method for Constructing and Training Feedforward Neural Networks”, Neural Computation, vol. 2, pp. 198–209, 1990.

  5. O. Fujita, “Optimization of Hidden Unit's Function for Feed-forward Neural Networks”, IEICE Tech. Rep., NC90-75, pp. 43–48, 1991 (in Japanese).

  6. S. J. Hanson, “Meiosis Networks”, in Advances in Neural Information Processing Systems, D. S. Touretzky, Ed. Los Altos, CA: Morgan Kaufmann, vol. 2, pp. 533–541, 1990.

  7. S. E. Fahlman and C. Lebiere, “The Cascade-Correlation Learning Architecture”, in Advances in Neural Information Processing Systems, D. S. Touretzky, Ed. Los Altos, CA: Morgan Kaufmann, vol. 2, pp. 524–532, 1990.

  8. E. Littmann and H. Ritter, “Cascade Network Architectures”, in Proc. of IEEE/INNS Int. Joint Conf. Neural Networks, Baltimore, vol. II, pp. 398–404, 1992.

  9. E. Littmann and H. Ritter, “Cascade LLM Networks”, in Proc. of Int. Conf. Artificial Neural Networks, Brighton, vol. 1, pp. 253–257, 1992.

  10. M. Hamamoto, J. Kamruzzaman and Y. Kumagai, “Generalization Ability of Artificial Neural Network Using Fahlman and Lebiere's Learning Algorithm”, in Proc. of IEEE/INNS Int. Joint Conf. Neural Networks, Baltimore, vol. I, pp. 613–618, 1992.

  11. M. Hamamoto, J. Kamruzzaman and Y. Kumagai, “A Study on Generalization Properties of Artificial Neural Network Using Fahlman and Lebiere's Learning Algorithm”, in Artificial Neural Networks, 2, I. Aleksander and J. Taylor, Eds. Amsterdam: North-Holland, vol. 2, pp. 1067–1070, 1992.

  12. M. Hamamoto, J. Kamruzzaman and Y. Kumagai, “Network Synthesis and Generalization Properties of Artificial Neural Network Using Fahlman and Lebiere's Learning Algorithm”, to appear in Proc. of 35th Midwest Symposium on Circuits and Systems, Washington, D. C., 1992.

  13. M. Hamamoto, J. Kamruzzaman, Y. Kumagai and H. Hikita, “Incremental Learning and Generalization Ability of Artificial Neural Network Trained by Fahlman and Lebiere's Learning Algorithm”, to be published in IEICE Trans. Fundamentals of Electronics, Communications and Computer Sciences.

Editor information

José Mira, Joan Cabestany, Alberto Prieto

Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hamamoto, M., Ito, K., Kamruzzaman, J., Kumagai, Y. (1993). Realization of surjective correspondence in artificial neural network trained by Fahlman and Lebiere's learning algorithm. In: Mira, J., Cabestany, J., Prieto, A. (eds) New Trends in Neural Computation. IWANN 1993. Lecture Notes in Computer Science, vol 686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-56798-4_154

  • DOI: https://doi.org/10.1007/3-540-56798-4_154

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-56798-1

  • Online ISBN: 978-3-540-47741-9

  • eBook Packages: Springer Book Archive
