
Multi-Objective Algorithms for Neural Networks Learning

  • Chapter

Part of the book series: Studies in Computational Intelligence ((SCI,volume 16))

Abstract

Most supervised learning algorithms for Artificial Neural Networks (ANNs) aim at minimizing the sum of squared errors over the training data [12, 11, 5, 10]. It is well known that learning algorithms based solely on error minimization do not guarantee models with good generalization performance. In addition to the training-set error, other network-related parameters should be adapted during the learning phase in order to control generalization performance. The need for more than a single objective function paves the way for treating the supervised learning problem with multi-objective optimization techniques. Although the learning problem is multi-objective by nature, only recently has it been given a formal multi-objective optimization treatment [16]. The problem has been approached from different points of view over the last two decades.
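To make the formulation concrete (a sketch in the spirit of [16], with illustrative notation rather than the chapter's own), learning can be cast as the simultaneous minimization of the training error and a complexity measure such as the squared norm of the weight vector:

$$\min_{\mathbf{w}} \; \Big( E(\mathbf{w}), \; \|\mathbf{w}\|^2 \Big), \qquad E(\mathbf{w}) = \sum_{i=1}^{N} \big( y_i - f(\mathbf{x}_i; \mathbf{w}) \big)^2 ,$$

where f(·; w) is the network mapping and (x_i, y_i) are the training pairs. Because the two objectives conflict, no single weight vector minimizes both; the candidate solutions form a Pareto-optimal set [18], from which one network is selected a posteriori, e.g., by validation error.

A minimal numerical sketch of this trade-off follows, using weighted-sum scalarization (a deliberately simple stand-in for the chapter's multi-objective machinery, not the authors' algorithm; all names and hyperparameters below are hypothetical):

```python
# Hypothetical sketch: sampling error/complexity trade-off solutions for a
# small MLP by weighted-sum scalarization of (training MSE, ||w||^2).
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: a noisy sine wave.
X = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

def init_params(n_hidden=10):
    # Small random weights for a one-hidden-layer tanh network.
    return {
        "W1": 0.5 * rng.standard_normal((1, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": 0.5 * rng.standard_normal((n_hidden, 1)),
        "b2": np.zeros(1),
    }

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"], h

def train(lam, steps=5000, lr=0.05):
    """Gradient descent on J = lam * MSE + (1 - lam) * ||w||^2."""
    p = init_params()
    n = len(X)
    for _ in range(steps):
        out, h = forward(p, X)
        g_out = 2.0 * (out - y) / n                # gradient of MSE w.r.t. out
        grads = {}
        grads["W2"] = h.T @ g_out
        grads["b2"] = g_out.sum(axis=0)
        g_h = (g_out @ p["W2"].T) * (1.0 - h**2)   # backprop through tanh
        grads["W1"] = X.T @ g_h
        grads["b1"] = g_h.sum(axis=0)
        for k in p:
            # Combine the error gradient with the norm-term gradient 2*w
            # (biases are included in the norm here, for simplicity).
            p[k] -= lr * (lam * grads[k] + (1.0 - lam) * 2.0 * p[k])
    out, _ = forward(p, X)
    mse = float(np.mean((out - y) ** 2))
    norm2 = float(sum(np.sum(v**2) for v in p.values()))
    return mse, norm2

# Sweep the scalarization weight to sample the trade-off curve.
for lam in (0.99, 0.9, 0.5, 0.1):
    mse, norm2 = train(lam)
    print(f"lam={lam:4.2f}  MSE={mse:.4f}  ||w||^2={norm2:.2f}")
```

Sweeping the weight from near 1 (pure error minimization) toward 0 (pure complexity minimization) yields networks along the error/complexity trade-off. Note that weighted-sum scalarization can only reach the convex portion of a Pareto front; it is used here purely to illustrate the trade-off, not as a substitute for the methods discussed in the chapter.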



References

  1. B. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pages 144–152, 1992.

  2. V. Chankong and Y. Y. Haimes. Multiobjective Decision Making: Theory and Methodology, volume 8. North-Holland (Elsevier), New York, 1983.

  3. C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.

  4. Y. LeCun, J. S. Denker, and S. A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems 2, pages 598–605, 1990.

  5. S. E. Fahlman. Faster-learning variations on back-propagation: an empirical study. In D. Touretzky, G. Hinton, and T. Sejnowski, editors, Proceedings of the 1988 Connectionist Models Summer School, Pittsburgh, pages 38–51. Morgan Kaufmann, San Mateo, CA, 1988.

  6. S. R. Gunn. Support vector machines for classification and regression. Technical report, Image Speech and Intelligent Systems Research Group, University of Southampton, 1997.

  7. S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall, 1999.

  8. E. D. Karnin. A simple procedure for pruning back-propagation trained neural networks. IEEE Transactions on Neural Networks, 1(2):239–242, 1990.

  9. M. C. Mozer and P. Smolensky. Skeletonization: a technique for trimming the fat from a network via relevance assessment. In Advances in Neural Information Processing Systems 1, pages 107–115, 1989.

  10. G. G. Parma, A. P. Braga, and B. R. Menezes. Sliding mode algorithm for training multi-layer neural networks. IEE Electronics Letters, 34(1):97–98, January 1998.

  11. M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: the RPROP algorithm. In Proceedings of the IEEE International Conference on Neural Networks, pages 586–591, San Francisco, CA, April 1993.

  12. D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533–536, 1986.

  13. N. Z. Shor. Cut-off method with space extension in convex programming problems. Cybernetics, 13(1):94–96, 1977.

  14. R. H. C. Takahashi, P. L. D. Peres, and P. A. V. Ferreira. H2/H-infinity multiobjective PID design. IEEE Control Systems Magazine, 17(5):37–47, 1997.

  15. V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.

  16. R. A. Teixeira, A. P. Braga, R. H. C. Takahashi, and R. R. Saldanha. Improving generalization of MLPs with multi-objective optimization. Neurocomputing, 35(1–4):189–194, 2000.

  17. G. E. Hinton. Connectionist learning procedures. Artificial Intelligence, 40:185–234, 1989.

  18. V. Pareto. Cours d'Économie Politique, volumes I and II. F. Rouge, Lausanne, 1896.

  19. R. A. Teixeira, A. P. Braga, R. H. C. Takahashi, and R. R. Saldanha. Utilização de seção áurea no cálculo de soluções eficientes para treinamento de redes neurais artificiais através de otimização multi-objetivo [Use of the golden section in computing efficient solutions for training artificial neural networks via multi-objective optimization]. In 8th Brazilian Symposium on Neural Networks, November 2004.

  20. U. Itkis. Control Systems of Variable Structure. Keter Publishing House, Jerusalem, 1976.

  21. M. A. Costa, A. P. Braga, B. R. de Menezes, G. G. Parma, and R. A. Teixeira. Training neural networks with a multi-objective sliding mode control algorithm. Neurocomputing, 51:467–473, 2003.

  22. M. A. Costa, A. P. Braga, and B. R. de Menezes. Improving neural networks generalization with new constructive and pruning methods. Journal of Intelligent & Fuzzy Systems, 10:1–9, 2003.

  23. C. L. Blake and C. J. Merz. UCI Repository of Machine Learning Databases. University of California, Irvine, Dept. of Information and Computer Sciences, 1998. http://www.ics.uci.edu/~mlearn/MLRepository.html

  24. S. E. Fahlman and C. Lebiere. The cascade-correlation learning architecture. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2. Morgan Kaufmann, 1990.

  25. J.-P. Nadal. Study of a growth algorithm for a feedforward network. International Journal of Neural Systems, 1(1):55–59, 1989.

  26. R. Kohavi. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI), 1995. http://citeseer.ist.psu.edu/105046.html

  27. S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4(1):1–58, 1992.



Copyright information

© 2006 Springer

About this chapter

Cite this chapter

Braga, A.P., Takahashi, R.H.C., Costa, M.A., Teixeira, R.A. (2006). Multi-Objective Algorithms for Neural Networks Learning. In: Jin, Y. (ed.) Multi-Objective Machine Learning. Studies in Computational Intelligence, vol 16. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-33019-4_7


  • DOI: https://doi.org/10.1007/3-540-33019-4_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-30676-4

  • Online ISBN: 978-3-540-33019-6

