Pareto-Optimal Approaches to Neuro-Ensemble Learning

Part of the book series: Studies in Computational Intelligence (SCI, volume 16)

Abstract

The whole is greater than the sum of its parts: this is the essence of using a mixture of classifiers instead of a single classifier. In particular, an ensemble of neural networks (which we call a neuro-ensemble) has attracted special attention in the machine learning literature. A set of trained neural networks is combined using a post-gate to form a single super-network. The three main challenges facing researchers in neuro-ensemble learning are: (1) which networks to include in, or exclude from, the ensemble; (2) how to define the size of the ensemble; and (3) how to define diversity within the ensemble.
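The post-gate idea in the abstract can be sketched in a few lines: each trained network emits class probabilities, and a gate combines them into one super-network prediction. The sketch below uses a uniform weighting and stand-in lambda "networks" for illustration; it is an assumption-laden toy, not the chapter's actual method, which constructs the ensemble via Pareto-based multi-objective evolution.

```python
# Toy sketch of a post-gated neuro-ensemble: the gate averages the members'
# class-probability outputs (a uniform post-gate) and returns the argmax class.
# The member "networks" here are placeholder functions, not trained models.

def gate_combine(members, x, weights=None):
    """Combine members' probability outputs with a weighted average."""
    outputs = [m(x) for m in members]
    n_classes = len(outputs[0])
    if weights is None:
        weights = [1.0 / len(members)] * len(members)  # uniform post-gate
    combined = [
        sum(w * out[c] for w, out in zip(weights, outputs))
        for c in range(n_classes)
    ]
    return combined.index(max(combined)), combined

# Three stand-in "networks" mapping an input to probabilities over 2 classes.
net_a = lambda x: [0.9, 0.1] if x > 0 else [0.4, 0.6]
net_b = lambda x: [0.7, 0.3] if x > 0 else [0.2, 0.8]
net_c = lambda x: [0.6, 0.4] if x > 0 else [0.5, 0.5]

label, probs = gate_combine([net_a, net_b, net_c], x=1.0)
```

The three challenges from the abstract map directly onto this sketch: member selection decides which functions enter `members`, ensemble size is `len(members)`, and diversity concerns how differently the members respond to the same input.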




Copyright information

© 2006 Springer


Cite this chapter

Abbass, H. (2006). Pareto-Optimal Approaches to Neuro-Ensemble Learning. In: Jin, Y. (ed.) Multi-Objective Machine Learning. Studies in Computational Intelligence, vol 16. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-33019-4_18


  • DOI: https://doi.org/10.1007/3-540-33019-4_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-30676-4

  • Online ISBN: 978-3-540-33019-6

  • eBook Packages: Engineering (R0)
