Nonstandard Criteria in Evolutionary Learning

Copyright information

© 2017 Springer Science+Business Media New York

Cite this entry

Sebag, M. (2017). Nonstandard Criteria in Evolutionary Learning. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7687-1_599
