Increased complexity training

  • Conference paper
New Trends in Neural Computation (IWANN 1993)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 686)

Abstract

The training strategy used in connectionist learning has received little attention in the literature. We suggest a new strategy for backpropagation learning, increased complexity training, and show experimentally that it converges faster than both the conventional strategy of training on a fixed set and combined subset training. Combining increased complexity training with an incremental increase in the success ratio required on the training set produced even faster convergence.
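
The abstract does not spell the procedure out, so here is a minimal sketch of what increased complexity training with an incrementally raised success-ratio requirement might look like. The toy task (parity over a growing number of bits), the network size, the learning rate, the thresholds, and all function names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of increased complexity training (illustrative only).
# The task, network, and schedule below are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

def make_parity_data(n_bits, n_samples, pad_to):
    """Parity of the first n_bits inputs; patterns zero-padded to pad_to inputs."""
    X = np.zeros((n_samples, pad_to))
    X[:, :n_bits] = rng.integers(0, 2, size=(n_samples, n_bits))
    return X, (X[:, :n_bits].sum(axis=1) % 2).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer network trained with plain backpropagation.
pad_to, hidden, lr = 6, 12, 0.5
W1 = rng.normal(0.0, 0.5, (pad_to, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.5, (hidden, 1));      b2 = np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

def train_epoch(X, y):
    """One full-batch backpropagation step (squared error, sigmoid units)."""
    global W1, b1, W2, b2
    h, out = forward(X)
    d_out = (out - y) * out * (1.0 - out)  # output-layer delta
    d_h = (d_out @ W2.T) * h * (1.0 - h)   # hidden-layer delta
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

def success_ratio(X, y):
    _, out = forward(X)
    return float(((out > 0.5) == (y > 0.5)).mean())

# Stages of increasing pattern complexity; the same network carries its
# weights from one stage to the next, and the required success ratio on
# the training set is raised incrementally alongside the complexity.
for stage, n_bits in enumerate([2, 4, 6], start=1):
    X, y = make_parity_data(n_bits, 200, pad_to)
    required = min(0.80 + 0.05 * stage, 0.95)
    epochs = 0
    while success_ratio(X, y) < required and epochs < 20000:
        train_epoch(X, y)
        epochs += 1
    print(f"stage {stage} ({n_bits}-bit parity): "
          f"ratio {success_ratio(X, y):.2f} after {epochs} epochs")
```

The two ideas from the abstract map onto the stage loop: each stage presents harder patterns to the same network rather than restarting training, and the convergence criterion is tightened as the patterns get harder.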

Editor information

José Mira, Joan Cabestany, Alberto Prieto

Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Cloete, I., Ludik, J. (1993). Increased complexity training. In: Mira, J., Cabestany, J., Prieto, A. (eds) New Trends in Neural Computation. IWANN 1993. Lecture Notes in Computer Science, vol 686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-56798-4_158

  • DOI: https://doi.org/10.1007/3-540-56798-4_158

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-56798-1

  • Online ISBN: 978-3-540-47741-9

  • eBook Packages: Springer Book Archive
