On simultaneous weight and architecture learning

  • Plasticity Phenomena (Maturing, Learning and Memory)
  • Conference paper
Biological and Artificial Computation: From Neuroscience to Technology (IWANN 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1240)


Abstract

Neural network learning is most often understood as automatic parameter adaptation. Connection strengths between units are typically updated after successive presentations of exemplar data so that the system can generalize the underlying function or pattern-classification rule to previously unseen cases. Despite the impact of other operational, architectural and analysis aspects, only a minority of the algorithms following this inductive approach address parameters other than the synaptic weights. In this paper we discuss a pruning method that automatically determines not only the weights but also the topology of a class of learning systems. A procedure to dynamically adapt the pruning strength is also discussed.
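
The abstract describes the approach only in outline, so the following is a loose, hypothetical sketch of the general family it points to: magnitude-based connection pruning applied during backpropagation training, with a pruning threshold adapted from the training error. The network, data, thresholds and the adaptation rule below are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch only: generic magnitude pruning with an error-driven,
# adaptive pruning threshold. The abstract gives no algorithmic details, so
# every choice below (network size, data, rule for adapting `lam`) is assumed.
import numpy as np

rng = np.random.default_rng(0)

# Toy task (XOR), a common test bed for pruning experiments.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Deliberately oversized hidden layer so pruning has connections to remove.
n_in, n_hid, n_out = 2, 8, 1
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))
mask1 = np.ones_like(W1)   # 0 marks a pruned connection
mask2 = np.ones_like(W2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5          # learning rate
lam = 1e-4        # pruning strength (magnitude threshold), adapted below
prev_err = np.inf

for epoch in range(5000):
    # Forward and backward pass on the masked (partially pruned) network.
    h = sigmoid(X @ (W1 * mask1))
    out = sigmoid(h @ (W2 * mask2))
    err = float(np.mean((out - y) ** 2))

    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ (W2 * mask2).T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) * mask2
    W1 -= lr * (X.T @ d_hid) * mask1

    # Every 200 epochs: adapt the pruning strength, then prune by magnitude.
    if (epoch + 1) % 200 == 0:
        # Assumed heuristic: prune harder while the error keeps falling,
        # back off once pruning (or training) has made the error worse.
        lam = min(lam * 1.5, 0.5) if err < prev_err else lam * 0.5
        prev_err = err
        mask1[np.abs(W1) < lam] = 0.0
        mask2[np.abs(W2) < lam] = 0.0

print(f"final MSE {err:.4f}; "
      f"{int(mask1.sum() + mask2.sum())}/{mask1.size + mask2.size} weights kept")
```

In a sketch of this kind the topology is learned only implicitly through the mask: a hidden unit whose incoming and outgoing connections have all been pruned effectively disappears, which is how pruning methods typically shrink the architecture as well as the weights.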

Editor information

José Mira, Roberto Moreno-Díaz, Joan Cabestany

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Rementeria, S., Olabe, X. (1997). On simultaneous weight and architecture learning. In: Mira, J., Moreno-Díaz, R., Cabestany, J. (eds) Biological and Artificial Computation: From Neuroscience to Technology. IWANN 1997. Lecture Notes in Computer Science, vol 1240. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0032509

  • DOI: https://doi.org/10.1007/BFb0032509

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63047-0

  • Online ISBN: 978-3-540-69074-0

  • eBook Packages: Springer Book Archive
