Recommended Reading
Amit Y, Geman D, Wilder K (1997) Joint induction of shape features and tree classifiers. IEEE Trans Pattern Anal Mach Intell 19(11):1300–1305
Banzhaf W, Langdon WB (2002) Some considerations on the reason for bloat. Genet Progr Evolvable Mach 3(1):81–91
Ben-David S, von Luxburg U, Shawe-Taylor J, Tishby N (eds) (2005) Theoretical foundations of clustering. In: NIPS workshop
Bishop C (2006) Pattern recognition and machine learning. Springer, New York
Blickle T (1996) Evolving compact solutions in genetic programming: a case study. In: Voigt H-M et al (eds) Proceedings of the 4th international conference on parallel problem solving from nature. Lecture notes in computer science, vol 1141. Springer, Berlin, pp 564–573
Boser B, Guyon I, Vapnik V (1992) A training algorithm for optimal margin classifiers. In: Proceedings of the 5th annual ACM conference on computational learning theory (COLT’92), Pittsburgh, pp 144–152
Breiman L (1998) Arcing classifiers. Ann Stat 26(3):801–845
Breiman L (2001) Random forests. Mach Learn 45(1):5–32
Chandra A, Yao X (2006a) Ensemble learning using multi-objective evolutionary algorithms. J Math Model Algorithm 5(4):417–425
Chandra A, Yao X (2006b) Evolving hybrid ensembles of learning machines for better generalisation. Neurocomputing 69:686–700
Cortes C, Vapnik VN (1995) Support-vector networks. Mach Learn 20:273–297
Cortes C, Mohri M (2004) Confidence intervals for the area under the ROC curve. Adv Neural Inf Process Syst NIPS 17
Cristianini N, Shawe-Taylor J (2000) An introduction to support vector machines and other kernel-based learning methods. Cambridge University Press, Cambridge
Hand DJ (2009) Measuring classifier performance: a coherent alternative to the area under the ROC curve. Mach Learn 77(1):103–123. doi:10.1007/s10994-009-5119-5
Deb K, Agrawal S, Pratab A, Meyarivan T (2000) A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In: Schoenauer M et al (eds) Proceedings of the parallel problem solving from nature VI conference, Paris. Lecture notes in computer science, vol 1917. Springer, pp 849–858
Dietterich TG (1998) Approximate statistical tests for comparing supervised classification learning algorithms. Neural Comput 10:1895–1923
Dietterich T (2000) Ensemble methods in machine learning. In: Kittler J, Roli F (eds) First international workshop on multiple classifier systems. Springer, Berlin, pp 1–15
Domingos P (1999) Meta-cost: a general method for making classifiers cost sensitive. In: Proceedings of the 5th ACM SIGKDD international conference on knowledge discovery and data mining. ACM, San Diego, pp 155–164
Duda RO, Hart PE, Stork DG (2001) Pattern classification, 2nd ed. Wiley, New York
Ferri C, Flach PA, Hernández-Orallo J (2002) Learning decision trees using the area under the ROC curve. In: Sammut C, Hoffmann A (eds) Proceedings of the nineteenth international conference on machine learning (ICML 2002). Morgan Kaufmann, pp 179–186
Fogel DB, Wasson EC, Boughton EM, Porto VW, Angeline PJ (1998) Linear and neural models for classifying breast cancer. IEEE Trans Med Imag 17(3):485–488
Freund Y, Schapire RE (1996) Experiments with a new boosting algorithm. In: Saitta L (ed) Proceedings of the thirteenth international conference on machine learning (ICML 1996). Morgan Kaufmann, Bari, pp 148–156
Friedrichs F, Igel C (2005) Evolutionary tuning of multiple SVM parameters. Neurocomputing 64(C):107–117
Gagné C, Schoenauer M, Sebag M, Tomassini M (2006) Genetic programming for kernel-based learning with co-evolving subsets selection. In: Runarsson TP, Beyer H-G, Burke EK, Merelo Guervós JJ, Whitley LD, Yao X (eds) Parallel problem solving from nature – PPSN IX. Lecture notes in computer science, vol 4193, pp 1008–1017. Springer
Gagné C, Sebag M, Schoenauer M, Tomassini M (2007) Ensemble learning for free with evolutionary algorithms? In: Lipson H (ed) Genetic and evolutionary computation conference (GECCO 2007). ACM, pp 1782–1789
Gathercole C, Ross P (1994) Dynamic training subset selection for supervised learning in genetic programming. In: Parallel problem solving from nature – PPSN III. Lecture notes in computer science, vol 866. Springer, pp 312–321
Gelly S, Teytaud O, Bredeche N, Schoenauer M (2006) Universal consistency and bloat in GP: some theoretical considerations about genetic programming from a statistical learning theory viewpoint. Revue d’Intell Artif 20(6):805–827
Gilad-Bachrach R, Navot A, Tishby N (2004) Margin based feature selection – theory and algorithms. In: Proceedings of the twenty-first international conference on machine learning (ICML 2004), Banff. ACM Press, p 43
Han J, Kamber M (2000) Data mining: concepts and techniques. Morgan Kaufmann, New York
Hastie T, Rosset S, Tibshirani R, Zhu J (2004) The entire regularization path for the support vector machine. Adv Neural Inf Process Syst NIPS 17
Heidrich-Meisner V, Igel C (2009) Hoeffding and Bernstein races for selecting policies in evolutionary direct policy search. In: Proceedings of the twenty-sixth international conference on machine learning (ICML 2009), Montreal. ACM, pp 401–408
Hillis WD (1990) Co-evolving parasites improve simulated evolution as an optimization procedure. Phys D 42:228–234
Holland J (1986) Escaping brittleness: the possibilities of general purpose learning algorithms applied to parallel rule-based systems. In: Michalski RS, Carbonell JG, Mitchell TM (eds) Machine learning: an artificial intelligence approach, vol 2. Morgan Kaufmann, Los Altos, pp 593–623
Holland JH (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor
Holte RC (1993) Very simple classification rules perform well on most commonly used datasets. Mach Learn 11:63–90
Monirul Islam M, Yao X (2008) Evolving artificial neural network ensembles. In: Fulcher J, Jain LC (eds) Computational intelligence: a compendium. Studies in computational intelligence, vol 115. Springer, pp 851–880
Joachims T (2005) A support vector method for multivariate performance measures. In: De Raedt L, Wrobel S (eds) Proceedings of the twenty-second international conference on machine learning (ICML 2005), Bonn. ACM international conference proceeding series, vol 119. ACM, pp 377–384
Jong K, Marchiori E, Sebag M (2004) Ensemble learning with evolutionary computation: application to feature ranking. In: Yao X et al (eds) Parallel problem solving from nature – PPSN VIII. Lecture notes in computer science, vol 3242. Springer, pp 1133–1142
Miikkulainen R, Stanley KO, Bryant BD (2003) Evolving adaptive neural networks with and without adaptive synapses. In: Proceedings of the 2003 congress on evolutionary computation (CEC 2003), vol 4. IEEE, pp 2557–2564
Krawiec K, Bhanu B (2007) Visual learning by evolutionary and coevolutionary feature synthesis. IEEE Trans Evol Comput 11(5):635–650
Lippmann R, Haines JW, Fried DJ, Korba J, Das K (2000) Analysis and results of the 1999 DARPA on-line intrusion detection evaluation. In: Debar H, Mé L, Wu SF (eds) Recent advances in intrusion detection. Lecture notes in computer science, vol 1907. Springer, Berlin, pp 162–182
Liu Y, Yao X, Higuchi T (2000) Evolutionary ensembles with negative correlation learning. IEEE Trans Evol Comput 4(4):380–387
Llorà X, Sastry K, Goldberg DE, Gupta A, Lakshmi L (2005) Combating user fatigue in IGAS: partial ordering, support vector machines, and synthetic fitness. In: Beyer H-G, O’Reilly U-M (eds) Genetic and evolutionary computation conference (GECCO 05). ACM, New York, pp 1363–1370
Margineantu D, Dietterich TG (1997) Pruning adaptive boosting. In: Proceedings of the fourteenth international conference on machine learning (ICML 1997), Nashville. Morgan Kaufmann, pp 211–218
Mierswa I (2006) Evolutionary learning with kernels: a generic solution for large margin problems. In: Cattolico M (ed) Genetic and evolutionary computation conference (GECCO 06). ACM, New York, pp 1553–1560
Mierswa I (2007) Controlling overfitting with multi-objective support vector machines. In: Lipson H (ed) Genetic and evolutionary computation conference (GECCO 07), Philadelphia, pp 1830–1837
Mozer MC, Dodier R, Colagrosso MC, Guerra-Salcedo C, Wolniewicz R (2001) Prodding the ROC curve: constrained optimization of classifier performance. Adv Neural Inf Process Syst NIPS. MIT Press
Platt J (1999) Fast training of support vector machines using sequential minimal optimization. In: Schölkopf B et al (eds) Advances in kernel methods – support vector learning. Morgan Kaufmann
Poli R (2008) Genetic programming theory. In: Ryan C, Keijzer M (eds) Genetic and evolutionary computation conference (GECCO 2008), Atlanta (Companion). ACM, pp 2559–2588
Rosset S (2004) Model selection via the AUC. In: Proceedings of the twenty-first international conference on machine learning (ICML 2004), Banff. ACM international conference proceeding series, vol 69. ACM
Rumelhart DE, McClelland JL (1990) Parallel distributed processing. MIT Press, Cambridge
Schapire RE (1990) The strength of weak learnability. Mach Learn 5(2):197–227
Schoenauer M, Xanthakis S (1993) Constrained GA optimization. In: Forrest S (ed) Proceedings of the 5th international conference on genetic algorithms. Morgan Kaufmann, San Mateo, pp 573–580
Schölkopf B, Burges C, Smola A (eds) (1998) Advances in kernel methods: support vector learning. MIT Press, Cambridge
Song D, Heywood MI, Nur Zincir-Heywood A (2003) A linear genetic programming approach to intrusion detection. In: Proceedings of the genetic and evolutionary computation conference (GECCO). Lecture notes in computer science, vol 2724. Springer, Berlin/New York, pp 2325–2336
Song D, Heywood MI, Nur Zincir-Heywood A (2005) Training genetic programming on half a million patterns: an example from anomaly detection. IEEE Trans Evol Comput 9(3):225–239
Sonnenburg S, Franc V, Yom-Tov E, Sebag M (eds) (2008) Large scale machine learning challenge. In: ICML workshop, Helsinki
Sutton RS, Barto AG (1998) Reinforcement learning. MIT Press, Cambridge
Suttorp T, Igel C (2006) Multi-objective optimization of support vector machines. In: Jin Y (ed) Multi-objective machine learning. Studies in computational intelligence, vol 16. Springer, Berlin, pp 199–220
Tibshirani R (1996) Regression shrinkage and selection via the lasso. J R Stat Soc B 58(1):267–288
Vapnik VN (1995) The nature of statistical learning theory. Springer, New York
Venturini G, Slimane M, Morin F, Asselin de Beauville JP (1997) On using interactive genetic algorithms for knowledge discovery in databases. In: Bäck Th (ed) International conference on genetic algorithms (ICGA). Morgan Kaufmann, pp 696–703
Zhang T (2003) Leave-one-out bounds for kernel methods. Neural Comput 15(6):1397–1437
© 2017 Springer Science+Business Media New York
Cite this entry
Sebag, M. (2017). Nonstandard Criteria in Evolutionary Learning. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7687-1_599
DOI: https://doi.org/10.1007/978-1-4899-7687-1_599
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4899-7685-7
Online ISBN: 978-1-4899-7687-1
eBook Packages: Computer Science; Reference Module Computer Science and Engineering