Dhish Kumar Saxena, Sukrit Mittal, Kalyanmoy Deb, and Erik Goodman’s book, “Machine Learning Assisted Evolutionary Multi- and Many-Objective Optimization”, explores the complementary potential of evolutionary multi- and many-objective optimization algorithms (EMOAs) and machine learning. EMOAs are powerful tools for handling complex optimization problems with multiple conflicting objectives, but they often suffer from high computational cost, sensitivity to parameters, premature convergence, and difficulty in handling constraints and large-scale problems. Machine learning can address these limitations by accelerating evaluations, adapting parameters, and improving exploration and exploitation, leading to more efficient and effective optimization. Specifically, EMOAs generate valuable datasets of evolved high-performance solutions; Saxena et al. review a wide range of machine learning techniques for mining these datasets, from linear regression, random forests, and artificial neural networks to support vector regression, to gain deeper insights into problem structure and solution optimality and to develop innovative methods for improving convergence and diversity. Beyond synthesizing existing research, Saxena et al. also pioneer new directions by exploring how machine learning can augment various components of evolutionary optimization algorithms, stimulating future research.
Following the introduction, Chapter 2 establishes a foundation by introducing optimization problem types and algorithm classes, laying the groundwork for understanding the challenges involved in solving multi- and many-objective optimization problems (where “many-objective” denotes four or more objectives). Chapter 3 delves into the historical development of machine learning-assisted evolutionary optimization, focusing on techniques such as objective reduction and “innovization”. The term “innovization” denotes a process that combines optimization and innovation: an EMOA is first used to generate a set of near-optimal trade-off solutions for a given problem, and analysis of these solutions then identifies new design principles that are common to a subset or all of them. Chapter 3 also includes solved examples that illustrate the innovization process.
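The flavor of this post-analysis step can be sketched in a few lines. The example below is not from the book: the hidden rule x2 = 2·x1 and all data are invented, and a simple log-space least-squares fit stands in for the more general rule-mining discussed in Chapter 3.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical trade-off solutions from an EMOA run in which, unknown to
# the analyst, every near-optimal design satisfies x2 ≈ 2 * x1.
x1 = rng.uniform(0.1, 1.0, 50)
x2 = 2.0 * x1 * (1.0 + 0.01 * rng.normal(size=50))

# Innovization-style post-analysis: fit a power law x2 = c * x1**b by
# least squares in log space and read off the design principle.
b, log_c = np.polyfit(np.log(x1), np.log(x2), 1)
c = np.exp(log_c)
print(f"design rule: x2 = {c:.2f} * x1**{b:.2f}")
```

Recovering b ≈ 1 and c ≈ 2 exposes the common design principle shared by all the near-optimal solutions, which is exactly the kind of knowledge innovization aims to extract.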
Chapter 4 turns to learning the structure of the optimization problem through objective reduction, identifying redundant objectives and ranking essential ones. An example illustrates dimensionality reduction using Principal Component Analysis (PCA) and Maximum Variance Unfolding, and modified, improved versions of both methods are used to extract the preference structure and objective ranking. The chapter concludes by showcasing these decision-support techniques on a sample problem and two real-world applications.
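The intuition behind PCA-based objective reduction can be illustrated with a minimal sketch. This is standard PCA on invented data (a three-objective problem where f3 deliberately duplicates f1), not the authors’ modified variants:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical objective vectors for 200 solutions of a 3-objective
# problem, where f3 is (nearly) a copy of f1, i.e. a redundant objective.
f1 = rng.uniform(0.0, 1.0, 200)
f2 = rng.uniform(0.0, 1.0, 200)
f3 = f1 + 0.01 * rng.normal(size=200)
F = np.column_stack([f1, f2, f3])

# Standard PCA on the standardized objective matrix.
Z = (F - F.mean(axis=0)) / F.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]            # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Two components capture almost all variance: the effective objective
# dimensionality is 2, flagging one objective as redundant.
explained = eigvals / eigvals.sum()
print(explained)  # third ratio is near zero
```

The third variance ratio being negligible signals that one objective carries essentially no independent conflict information, which is the starting point for the redundancy analysis the chapter develops.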
The core of the book lies in Chapters 5, 6, and 7, which introduce novel machine learning-based operators to enhance the convergence, diversity, and overall performance of EMOAs, particularly within the framework of reference-vector-based methods. Reference-vector-based EMOAs optimize multiple conflicting objectives by considering a set of reference vectors, each representing a specific preference or goal; the algorithm seeks optimal solutions aligned with these vectors, yielding a diverse set of solutions that balance the objectives. Chapter 5 presents the Innovized Progress 2 (IP2) operator, which enhances convergence in reference-vector-based EMOAs by training a machine learning model on past solutions to predict improved offspring. These predicted offspring replace a portion of the naturally generated offspring, guiding the search towards better convergence without additional function evaluations. Algorithms are presented for all steps of the operator, and experimental results demonstrate the effectiveness of IP2 in improving convergence on challenging problems. Chapter 5 provides an example where a Random Forest serves as the machine learning model in NSGA-III, but any regression method is suitable, including Artificial Neural Networks, Support Vector Regression, or XGBoost.
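The central idea, training a regressor on pairs of earlier and improved solutions and applying it to fresh offspring, can be sketched as follows. This is a toy setup with an invented optimum at the origin, and a hand-rolled nearest-neighbour regressor standing in for the Random Forest of the book’s example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for IP2-style training data: pairs (x_old, x_new), where
# x_new is the same solution after it has progressed toward the optimum
# (here an invented optimum at the origin, so "progress" halves each
# coordinate).
X_old = rng.uniform(-1.0, 1.0, (100, 2))
X_new = 0.5 * X_old

# Minimal k-nearest-neighbour regressor standing in for the Random
# Forest used in the book's example; any regression model would do.
def knn_predict(X_train, Y_train, x, k=3):
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    return Y_train[idx].mean(axis=0)

# "Advance" a freshly generated offspring to a predicted improved
# position, as IP2 does for a fraction of the offspring each generation.
offspring = np.array([0.8, -0.6])
improved = knn_predict(X_old, X_new, offspring)
# `improved` lies roughly halfway between the offspring and the optimum.
```

Because the model only recycles information already paid for in earlier generations, the predicted advance costs no extra objective-function evaluations.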
Chapter 6 presents the IP3 operator, which enhances diversity in reference-vector-based EMOAs by training multiple machine learning models on intra-generational solutions. These models generate offspring that improve both the spread and the uniformity of the solution set without requiring additional function evaluations. Experimental results demonstrate the effectiveness of IP3 in improving diversity on challenging problems, and the chapter provides examples where k-Nearest Neighbors regressor models are used to improve NSGA-III.
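A diversity-oriented sketch of the same flavor: on an invented linear front x1 + x2 = 1 whose population is clustered at both ends, a k-nearest-neighbors regressor (standing in for the chapter’s models) trained on the current population generates an offspring aimed at the under-populated middle:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented scenario: the population lies on the linear front x1 + x2 = 1
# but is clustered at both ends, leaving a diversity gap in the middle.
t = np.concatenate([rng.uniform(0.0, 0.2, 20), rng.uniform(0.8, 1.0, 20)])
X = np.column_stack([t, 1.0 - t])

# k-nearest-neighbors regression from a target position along the front
# to a decision vector, trained on the current (intra-generational)
# population only.
def knn_predict(T, X_train, t_query, k=5):
    idx = np.argsort(np.abs(T - t_query))[:k]
    return X_train[idx].mean(axis=0)

# Generate an offspring aimed at the unoccupied middle of the front.
x_gap = knn_predict(t, X, 0.5)
# x_gap still satisfies x1 + x2 = 1, i.e. it lies on the front, pulled
# toward the gap between the clusters.
```

The point is that the generated solution stays on the learned front while filling a sparsely covered region, improving spread without any new function evaluations.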
Chapter 7 presents the Unified IP (UIP) operator, which combines the strengths of IP2 and IP3 to simultaneously improve convergence and diversity in reference-vector-based EMOAs. UIP adaptively invokes IP2 and IP3 based on the current needs of the optimization process. Experimental results demonstrate the effectiveness of UIP in enhancing the performance of various EMOAs (such as NSGA-III, θ-DEA, MOEA/DD, and LHFiD) on a wide range of problems, and full algorithmic details are provided.
Chapter 8 evaluates the influence of the choice of machine learning algorithm on the performance of the innovized operators (IP2, IP3, UIP). Eight methods, namely Linear Regression, Ridge Regression, Elastic Net Regression, Extra Trees Regressor, Random Forest, XGBoost, k-Nearest Neighbors, and Support Vector Regression, are compared to assess their suitability for IP2 and IP3.
Chapter 9 explores post-optimization analysis techniques that use machine learning models to learn the mapping between objective vectors and their corresponding decision variables. This learned mapping can be used to generate new solutions on the Pareto front without additional optimization effort. The technique is demonstrated using Deep Neural Networks and Gaussian Process Regression with NSGA-III on both synthetic and real-world problems, showing its potential for enhancing the diversity and distribution of solutions. By conditioning the machine learning models, the approach gives decision-makers a flexible tool for exploring the Pareto front.
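A miniature version of this idea, on an invented front f(x) = (x, 1 − √x) and with a from-scratch RBF Gaussian-process fit standing in for the book’s Gaussian Process Regression demonstration, might look like:

```python
import numpy as np

# Invented Pareto front: each decision variable x in [0, 1] maps to the
# objective vector f(x) = (x, 1 - sqrt(x)). We learn the inverse mapping
# from objective vectors back to decision variables.
x_train = np.linspace(0.0, 1.0, 15)
F_train = np.column_stack([x_train, 1.0 - np.sqrt(x_train)])

def rbf(A, B, ell=0.2):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * ell**2))

# Gaussian-process regression: solve K alpha = y, with a small jitter
# on the diagonal for numerical stability.
K = rbf(F_train, F_train) + 1e-6 * np.eye(len(x_train))
alpha = np.linalg.solve(K, x_train)

# Query the decision variable of a *new* point on the Pareto front
# (its true decision variable is x = 0.49).
f_query = np.array([[0.49, 1.0 - np.sqrt(0.49)]])
x_pred = (rbf(f_query, F_train) @ alpha)[0]
```

Once such a model is fitted, a decision-maker can ask for the design behind any desired trade-off point without rerunning the optimizer, which is the decision-support use the chapter emphasizes.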
The book concludes in Chapter 10 by emphasizing the potential of the emerging field of EMOAs and machine learning and outlining promising research directions such as machine learning assisted reference vector creation, initialization, normalization, termination, hyper-parameter learning, and combinatorial optimization.
Published at the forefront of research, the book provides a timely exploration of the intersection of machine learning and EMOAs. Its comprehensive coverage appeals to a broad audience, from those new to the field seeking a solid introduction to experienced researchers looking for the latest advancements and in-depth knowledge. The book’s clear explanations, algorithm presentations, and practical examples make complex concepts accessible to a wide range of readers.
While the book’s primary focus on reference-vector-based algorithms and many-objective problems limits its scope, it effectively captures the state of the art within these areas. Although it could be enhanced by delving deeper into areas beyond reference-vector-based algorithms, an experienced researcher can readily adapt the algorithms to other multi-objective evolutionary algorithms. The text is accessible to both EMO experts and non-AI specialists, providing clear explanations and practical examples. With its valuable insights and comprehensive reference list, the book serves as a solid foundation for researchers interested in the intersection of machine learning and evolutionary multi-objective optimization.
Machine Learning Assisted Evolutionary Multi- and Many-Objective Optimization is a valuable addition to the literature on machine learning-assisted evolutionary multi-objective optimization, and its length (244 pages) suits its comprehensive coverage. Overall, it is well structured, well written, and provides a solid foundation for researchers interested in this emerging field. It will be a valuable resource for a wide range of readers, including researchers specializing in genetic programming and evolvable machines, PhD students in artificial intelligence or machine learning, and university libraries.
Acknowledgements
This review was funded by the Scientific and Technological Research Council of Türkiye (TUBITAK) ARDEB 3501 Grant No: 222M440.
Selçuklu, S.B. “Machine learning assisted evolutionary multi- and many-objective optimization” by Dhish Kumar Saxena, Sukrit Mittal, Kalyanmoy Deb, and Erik D. Goodman, ISBN 978-981-99-2095-2, Springer, 2024. Genet Program Evolvable Mach 26, 10 (2025). https://doi.org/10.1007/s10710-025-09509-6