ABSTRACT

Optimization aims at finding one or more optimal solutions among all feasible solutions, where an optimal solution represents the extremum with respect to a given objective. This chapter introduces an evolving approach to generating benchmark test problems: a systematic method for constructing performance-comparison-based benchmark problems, namely the hierarchical-fitness-based evolving benchmark generator (HFEBG). The chapter describes the HFEBG framework together with two variants, HFEBG-U and HFEBG-H, which apply the U-test and the H-test, respectively, in different hierarchical-fitness assignment methods. Testing optimization algorithms on both real-world problems and benchmark problems yields a measure of their performance. Performance comparison of optimization algorithms is studied in terms of unique difficulty, that is, uniquely easy and uniquely difficult problems: problems on which a given algorithm performs significantly better, or significantly worse, than its peer algorithms. A common criticism in the field is that many proposed novel optimization algorithms actually contribute little, since they are not compared against the winners of competitions. The sequential learnable evolutionary algorithm (SLEA) provides an algorithm-selection framework for solving black-box continuous design-optimization problems.
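
To make the role of the statistical tests concrete, the following is a minimal Python sketch, not the chapter's implementation, of a U-test-based hierarchical fitness in the spirit of HFEBG-U. The function name, the two-level (count, tie-breaker) fitness encoding, and the use of scipy.stats.mannwhitneyu are illustrative assumptions; an HFEBG-H-style variant would analogously apply the Kruskal-Wallis H-test (e.g. scipy.stats.kruskal) across all algorithms at once.

```python
# A hedged sketch of a U-test-based hierarchical fitness for one
# candidate benchmark problem: the candidate scores higher when the
# target algorithm's run results differ significantly from every
# peer algorithm's run results.

from scipy.stats import mannwhitneyu  # Mann-Whitney U-test


def hierarchical_fitness(target_runs, peer_runs_list, alpha=0.05):
    """Return a two-level fitness (count, tie_breaker) for a candidate problem.

    target_runs    -- best objective values from repeated runs of the
                      target algorithm on the candidate problem
    peer_runs_list -- one list of run results per peer algorithm
    The primary fitness counts peers from which the target differs
    significantly; the secondary fitness aggregates p-values as a
    tie-breaker (lower p-values mean stronger separation).
    """
    significant = 0
    p_sum = 0.0
    for peer_runs in peer_runs_list:
        _, p = mannwhitneyu(target_runs, peer_runs, alternative="two-sided")
        if p < alpha:
            significant += 1
        p_sum += p
    # Maximize both components lexicographically: more significant
    # separations first, then smaller aggregate p-values.
    return (significant, -p_sum)


if __name__ == "__main__":
    # Toy run results: the target separates clearly from peer 1 only.
    target = [0.010, 0.020, 0.015, 0.011, 0.013]
    peers = [
        [0.50, 0.40, 0.45, 0.55, 0.42],
        [0.020, 0.018, 0.016, 0.012, 0.014],
    ]
    print(hierarchical_fitness(target, peers))
```

Under this reading, an evolving generator would keep the candidate problems whose hierarchical fitness is lexicographically largest, steering the search toward uniquely easy or uniquely difficult problems for the target algorithm.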