Abstract
Most supervised learning algorithms for Artificial Neural Networks (ANNs) aim at minimizing the sum of squared errors over the training data [12, 11, 5, 10]. It is well known that learning algorithms based only on error minimization do not guarantee models with good generalization performance. In addition to the training-set error, other network-related parameters should be adapted during the learning phase in order to control generalization performance. The need for more than a single objective function paves the way for treating the supervised learning problem with multi-objective optimization techniques. Although the learning problem is multi-objective by nature, only recently has it been given a formal multi-objective optimization treatment [16]. The problem has been approached from different points of view over the last two decades.
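The trade-off described above can be illustrated with a minimal sketch (not from the chapter): a scalarized two-objective problem that minimizes training error plus a weighted penalty on model complexity. Ridge regression is used here as a stand-in for neural network training, since its closed form makes the trade-off explicit; the weight `lam` and the helper `scalarized_solution` are illustrative names, and sweeping `lam` traces an approximation of the Pareto front between the two objectives.

```python
import numpy as np

# Toy regression data standing in for a training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

def scalarized_solution(lam):
    """Minimize ||Xw - y||^2 + lam * ||w||^2 (closed-form ridge solution)."""
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    sse = float(np.sum((X @ w - y) ** 2))   # objective 1: training error
    norm2 = float(np.sum(w ** 2))           # objective 2: model complexity
    return sse, norm2

# Sweeping the trade-off weight traces an approximation of the Pareto
# front: lower training error comes at the cost of a larger weight norm.
front = [scalarized_solution(lam) for lam in (0.0, 0.1, 1.0, 10.0, 100.0)]
```

Each point of `front` is Pareto-optimal for this convex problem: as `lam` grows, the training error can only increase while the weight norm can only decrease, which is exactly the error/complexity trade-off that motivates the multi-objective treatment.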
References
B. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. Fifth Annual Workshop on Computational Learning Theory, pages 144–152, 1992.
V. Chankong and Y. Y. Haimes. Multiobjective Decision Making: Theory and Methodology, volume 8. North-Holland (Elsevier), New York, 1983.
C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273–297, 1995.
Yann Le Cun, John S. Denker, and Sara A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems 2, pages 598–605, 1990.
S. E. Fahlman. Faster-learning variations on back-propagation: an empirical study. In D. Touretzky, G. Hinton, and T. Sejnowski, editors, Proceedings of the 1988 Connectionist Models Summer School, Pittsburg, pages 38–51, San Mateo, CA, 1988. Morgan Kaufmann.
S. R. Gunn. Support vector machines for classification and regression. Technical report, Image Speech and Intelligent Systems Research Group, University of Southampton, 1997.
S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall, 1999.
Ehud D. Karnin. A simple procedure for pruning back-propagation trained neural networks. IEEE Transactions on Neural Networks, 1(2):239–242, 1990.
M. C. Mozer and P. Smolensky. Skeletonization: A technique for trimming the fat from a network via relevance assessment. In Advances in Neural Information Processing Systems 1, pages 107–115, 1989.
Gustavo G. Parma, Antonio P. Braga, and Benjamim R. Menezes. Sliding mode algorithm for training multi-layer neural networks. IEE Electronics Letters, 34(1):97–98, January 1998.
Martin Riedmiller and Heinrich Braun. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In Proc. of the IEEE Intl. Conf. on Neural Networks, pages 586–591, San Francisco, CA, April 1993.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533–536, 1986.
N. Z. Shor. Cut-off method with space extension in convex programming problems. Cybernetics, 13:94–96, 1977.
R. H. C. Takahashi, P. L. D. Peres, and P. A. V. Ferreira. H2/H-infinity multiobjective PID design. IEEE Control Systems Magazine, 17(5):37–47, 1997.
V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
R. A. Teixeira, A. P. Braga, R. H. C. Takahashi, and R. R. Saldanha. Improving generalization of MLPs with multi-objective optimization. Neurocomputing, 35(1–4):189–194, 2000.
G. E. Hinton. Connectionist learning procedures. Artificial Intelligence, 40:185–234, 1989.
V. Pareto. Cours d'Economie Politique. Rouge, Lausanne, 1896. Vols. I and II.
R. A. Teixeira, A. P. Braga, R. H. C. Takahashi, and R. R. Saldanha. Utilização de seção áurea no cálculo de soluções eficientes para treinamento de redes neurais artificiais através de otimização multi-objetivo [Use of the golden section in computing efficient solutions for training artificial neural networks via multi-objective optimization]. In 8th Brazilian Symposium on Neural Networks, November 2004.
U. Itkis. Control Systems of Variable Structure. Keter Publishing House, Jerusalem, 1976.
M. A. Costa, A. P. Braga, B. R. de Menezes, G. G. Parma, and R. A. Teixeira. Training neural networks with a multi-objective sliding mode control algorithm. Neurocomputing, 51:467–473, 2003.
M. A. Costa, A. P. Braga and B. R. de Menezes. Improving neural networks generalization with new constructive and pruning methods. Journal of Intelligent & Fuzzy Systems, 10:1–9, 2003.
C. L. Blake and C. J. Merz. UCI Repository of Machine Learning Databases. University of California, Irvine, Dept. of Information and Computer Sciences, http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998.
S. E. Fahlman and C. Lebiere. The cascade-correlation learning architecture. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2. Morgan Kaufmann, 1990.
Jean-Pierre Nadal. Study of a growth algorithm for a feedforward network. International Journal of Neural Systems, 1(1):55–59, 1989.
Ron Kohavi. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI), 1995.
S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4(1):1–58, 1992.
Braga, A.P., Takahashi, R.H., Costa, M.A., Teixeira, R.d. (2006). Multi-Objective Algorithms for Neural Networks Learning. In: Jin, Y. (eds) Multi-Objective Machine Learning. Studies in Computational Intelligence, vol 16. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-33019-4_7