A multiobjective optimization solver using rank-niche evolution strategy

https://doi.org/10.1016/j.advengsoft.2006.01.004

Abstract

A rank-niche evolution strategy (RNES) algorithm has been developed in this paper to solve unconstrained multiobjective optimization problems. The algorithm can generate a required number of Pareto-optimal solutions in a single run. In addition to the recombination, mutation and selection operations of the original evolution strategy (ES), an external elite set containing a given number of non-dominated elites is updated and trimmed by a clustering technique to maintain a uniformly distributed Pareto front. The fitness function for each individual combines information on its rank and its crowding status, so the selection operation considers superiority and distribution simultaneously. Eight test problems from other papers are used to test RNES. For some of these problems the Pareto-optimal solutions obtained by RNES are better than those obtained by GA-based algorithms.

Introduction

In engineering and other fields an optimization problem often involves more than one objective, so the need for multiobjective optimization is obvious. The major difference between single objective and multiobjective optimization is that the optimum solution of a multiobjective problem is not unique. A finite or infinite number of optimum solutions exists; together they form the set of Pareto-optimal solutions. The solutions in this set are equally important: none is better than any other in all objectives. The final solution is selected from this set through a subsequent decision-making process.

Due to the nature of multiobjective optimization, the capability of generating a large number of uniformly spaced Pareto-optimal solutions is the major concern for any solver. Several traditional mathematical programming methods exist for multiobjective optimization [1]; some are briefly reviewed as follows. The first is the constraint method, which selects one objective to be optimized and treats the remaining objectives as constraints with upper or lower limits. Its difficulty is that it is not easy to determine limits that guarantee a solution. The second is the weighting method, in which the objective functions are combined linearly into a single objective and the Pareto-optimal solutions are found by varying the weightings. Its drawback is that the solutions found are incomplete if the objective space is concave. The third is the min–max approach, which first normalizes each objective, then multiplies each normalized objective by a weighting parameter and treats it as a constraint bounded by a common variable upper limit; minimizing this common upper limit yields the optimum solution. The fourth is the compromise programming method, which also normalizes each objective, combines all objectives non-linearly into a single objective, and then applies a traditional single objective optimization solver.
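The weighting method described above can be sketched in a few lines. The two quadratic objectives, the candidate grid and the weight values below are illustrative assumptions, not from the paper; each weight vector yields one Pareto-optimal point, and uniform weight steps generally do not give uniformly spaced points on the front.

```python
# Weighting method sketch on a hypothetical two-objective problem:
# f1(x) = x^2 and f2(x) = (x - 2)^2, both to be minimized over [0, 2].

def f1(x):
    return x ** 2

def f2(x):
    return (x - 2.0) ** 2

def solve_weighted(w, xs):
    # Minimize the linear combination w*f1 + (1-w)*f2 over a coarse
    # candidate grid (a stand-in for a single objective solver).
    return min(xs, key=lambda x: w * f1(x) + (1.0 - w) * f2(x))

xs = [i * 0.01 for i in range(0, 201)]  # candidate designs in [0, 2]

# One Pareto-optimal point per weight value.
pareto = [(f1(x), f2(x))
          for w in (0.1, 0.3, 0.5, 0.7, 0.9)
          for x in [solve_weighted(w, xs)]]
```

Increasing the weight on f1 moves the solution toward the minimizer of f1, tracing out the front one transformed single objective problem at a time.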

In addition to the methods mentioned in the previous paragraph, the fuzzy decision making process is also frequently used to solve multiobjective optimization problems [2]. This approach defines membership functions for all objectives and constraints; the optimum solution is obtained by maximizing the minimum membership function value.
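The max-min fuzzy decision can be illustrated as follows. The linear membership function and the candidate designs are assumptions made for illustration, not the formulation of reference [2].

```python
# Toy max-min fuzzy decision: choose the design whose smallest
# membership value over all objectives is largest.

def membership(value, worst, best):
    # Linear membership: 0 at the worst acceptable value, 1 at the best.
    t = (worst - value) / (worst - best)
    return max(0.0, min(1.0, t))

# Candidate designs given as (f1, f2) objective values (minimization).
candidates = [(0.0, 4.0), (1.0, 1.0), (4.0, 0.0)]

def min_membership(design):
    f1, f2 = design
    return min(membership(f1, worst=4.0, best=0.0),
               membership(f2, worst=4.0, best=0.0))

# Maximize the minimum membership value over the candidates.
best_design = max(candidates, key=min_membership)
```

The balanced design wins here because the extreme designs each drive one membership value to zero.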

These methods generate Pareto-optimal solutions by changing the weighting parameters or membership functions of the objectives. Every Pareto-optimal solution is obtained by solving one transformed single objective optimization problem with a specific set of weighting parameters or membership functions, so the efficiency of finding many Pareto-optimal solutions is relatively low. Another drawback of these traditional methods is the non-uniform distribution of the Pareto-optimal solutions: although weighting parameters and membership functions can steer the solutions toward a desired area, it is almost impossible to determine parameter values that generate uniformly spaced Pareto-optimal solutions.

In recent years various evolutionary computation methods have been developed and widely used in many fields. The genetic algorithm (GA), developed by Holland [3] in 1962 and well known since the 1980s, is the most popular of them. The first step in a GA is to encode the design variables as binary strings or other codes. To simulate the natural evolution process, three major operations, namely selection, crossover and mutation, are performed to generate a new generation. The selection operation picks out the better individuals in a generation for the subsequent crossover operation. The crossover operation exchanges genetic material between two randomly chosen parents to generate new individuals. The mutation operation randomly changes genetic material with a very low probability; it supplements crossover to increase the probability of generating better individuals. These three operations are performed sequentially and repeatedly until the best individual is generated or the maximum number of generations is reached.
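The three GA operations above can be sketched in a minimal binary GA. The 5-bit encoding, the fitness function and all parameter values are illustrative assumptions, not the paper's settings.

```python
import random

# Minimal binary GA: selection (tournament), one-point crossover,
# and bit-flip mutation, maximizing an assumed single objective.

def decode(bits, lo=0.0, hi=31.0):
    # Map a 5-bit string to a real value in [lo, hi].
    return lo + int("".join(map(str, bits)), 2) * (hi - lo) / 31.0

def fitness(bits):
    x = decode(bits)
    return -(x - 13.0) ** 2        # maximize: peak at x = 13

def ga(pop_size=20, n_gen=60, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(n_gen):
        # Selection: binary tournament picks the better of two.
        parents = [max(rng.sample(pop, 2), key=fitness)
                   for _ in range(pop_size)]
        # Crossover: one-point exchange between consecutive parents.
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, 5)
            children += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
        # Mutation: flip each bit with a very low probability.
        pop = [[1 - g if rng.random() < p_mut else g for g in c]
               for c in children]
    return decode(max(pop, key=fitness))
```

Running `ga()` converges the population toward the encoded optimum at x = 13.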

There have been many research reports on using genetic algorithms to find Pareto-optimal solutions. Fonseca and Fleming [4] grouped them into three categories based on the ideas used. The first is the aggregating approach, which combines all objectives into a single one using various weighting methods; a genetic algorithm designed for single objective optimization is then used repeatedly to find Pareto-optimal solutions. The second is the population-based non-Pareto approach, which selects better individuals for each objective in turn; the selected individuals are pooled and then subjected to crossover and mutation. The third is the Pareto-based approach, which selects individuals according to the definition of Pareto domination before applying crossover and mutation. Syswerda [5], Jakob et al. [6] and Jones et al. [7] employed weighting methods to solve multiobjective optimization problems, while Wienke et al. [8] and Gembicki [9] used goal vectors to find Pareto-optimal solutions. The advantage of the aggregating approach lies in its easy implementation; the disadvantage is that it is hard to find complete Pareto-optimal solutions for non-convex problems. Different Pareto-optimal solutions are generated by changing the relative weightings of the objectives, but relying on a proper choice of weightings to yield evenly spaced Pareto-optimal solutions is very difficult or even impossible. The second category was initiated by Schaffer [10] with his method VEGA, whose selection scheme uses each objective in turn. Fourman [11] and Kursawe [12] used similar approaches to find Pareto-optimal solutions. Since this type of method selects individuals based only on their rank in a specific objective, its selection process is much simpler than that of the third category, and it can also find Pareto-optimal solutions for non-convex problems. Its disadvantage is that the distribution of Pareto-optimal solutions is usually non-uniform and biased toward some objectives. Goldberg [13], Fonseca [14], Srinivas and Deb [15] and Horn et al. [16] used the Pareto-based approach to solve multiobjective optimization problems. This kind of method artificially assigns higher fitness values to non-dominated individuals, and niching and sharing techniques are employed to adjust the fitness values so that individuals are distributed uniformly in the objective space. The advantage of this type of method is its ability to control the distribution of the solutions during selection; the disadvantage is that the domination check consumes a lot of computational time.
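The Pareto domination test underlying the third category, and the pairwise comparisons that make it expensive, can be sketched as follows (for minimization; the example points are illustrative).

```python
# Pareto domination check and naive non-dominated filtering.
# Comparing every pair of individuals is the O(n^2) cost that
# Pareto-based selection incurs each generation.

def dominates(a, b):
    # a dominates b if a is no worse in every objective and
    # strictly better in at least one.
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def nondominated(points):
    # Keep the points not dominated by any other point.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

front = nondominated([(1, 5), (2, 2), (5, 1), (4, 4), (3, 3)])
```

Here (4, 4) and (3, 3) are removed because (2, 2) dominates both; the three survivors are mutually non-dominated.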

The evolution strategy (ES), introduced by Rechenberg [17], is an evolutionary algorithm similar to GA. The optimum solution is obtained through the operations of recombination, mutation and selection; the detailed process is introduced in a later section. Because the evolution strategy does not need to encode and decode design variables, it is easier to implement than early versions of GA. ES has shown good performance in single objective optimization but is rarely used in multiobjective optimization. It is therefore the interest of this paper to develop an appropriate selection scheme, incorporated with the evolution strategy, to find Pareto-optimal solutions for multiobjective optimization problems. The Pareto-optimal solutions obtained by ES are compared with those obtained by several GA-based methods.

Section snippets

Evolution strategy

The evolution strategy is composed of three operations to generate a new generation [17], [18]. The first operation is recombination, which is similar to the crossover operation in GA: two parent individuals are selected randomly and exchange genetic material to yield a new individual, and this process is repeated until a certain number of new individuals is produced. The second operation is mutation, which changes the design variables of each individual using normally distributed random perturbations.
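One generation of the recombination, mutation and selection cycle described above can be sketched as a (mu + lambda)-ES. The sphere objective, the discrete recombination rule and all parameter values are illustrative assumptions, not the paper's RNES settings.

```python
import random

# Sketch of one (mu + lambda) evolution-strategy generation:
# discrete recombination of two random parents, Gaussian mutation
# of every design variable, then truncation selection.

def sphere(x):
    # Toy single objective to minimize.
    return sum(v * v for v in x)

def es_step(parents, lam, sigma, rng):
    children = []
    for _ in range(lam):
        a, b = rng.sample(parents, 2)
        # Recombination: each design variable from either parent.
        child = [a[i] if rng.random() < 0.5 else b[i]
                 for i in range(len(a))]
        # Mutation: add normally distributed noise to each variable.
        child = [v + rng.gauss(0.0, sigma) for v in child]
        children.append(child)
    # (mu + lambda) selection: best mu of parents and children survive.
    pool = parents + children
    return sorted(pool, key=sphere)[:len(parents)]

rng = random.Random(0)
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(100):
    pop = es_step(pop, lam=40, sigma=0.3, rng=rng)
```

Because the parents compete with their children, the best objective value never worsens from one generation to the next.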

Numerical examples

Two groups of test problems are used to test RNES. The first group is taken from the papers of Fonseca and Fleming [23], Kursawe [12], and Viennet et al. [24]. Table 1 shows the nature of these problems. The Pareto-optimal solutions obtained by RNES are plotted; to shorten the paper, only visual comparisons with the original papers are made, and no quantitative comparisons using the previously mentioned indexes are done for this group. The second group of problems is used for quantitative comparison with GA-based algorithms.

Conclusions

The majority of papers dealing with multiobjective optimization by evolutionary computation use GA-based methods. The evolution strategy, another well-known evolutionary computation algorithm, is rarely applied to multiobjective optimization problems. Hence this paper modifies and extends the original ES to solve such problems; the algorithm developed is named RNES. The original ES steps remain unchanged in RNES. The merit of this approach is that the good performance of the original ES is preserved.

Acknowledgement

This research was supported by the National Science Council of the Republic of China under grant number NSC92-2212-E-005-019.

References (24)

  • D. Wienke et al. Multicriteria target vector optimization of analytical procedures using a genetic algorithm. Part I. Theory, numerical simulations and application to atomic emission spectroscopy. Chim Acta (1992)
  • M. Save et al. (1990)
  • M. Yu et al. Multi-objective fuzzy optimization of structures based on generalized fuzzy decision-making. Comput Struct (1994)
  • J.H. Holland. Outline for a logical theory of adaptive systems. J Assoc Comput Mach (1962)
  • C.M. Fonseca et al. An overview of evolutionary algorithms in multiobjective optimization. Evol Comput (1995)
  • Syswerda GP. The application of genetic algorithms to resource scheduling. In: Proceedings of the fourth international...
  • W. Jakob et al. Application of genetic algorithms to task planning and learning
  • Jones G, Brown RD, Clark DE, Willet P, Glen RC. Searching databases of two-dimensional and three-dimensional chemical...
  • Gembicki FW. Vector optimization for control with performance and parameter sensitivity indices. PhD thesis, Case...
  • Schaffer JD. Multiple objective optimization with vector evaluated genetic algorithms. In: Proceedings of the first...
  • Fourman MP. Compaction of symbolic layout using genetic algorithms. In: Proceedings of the first international...
  • F. Kursawe. A variant of evolution strategies for vector optimization
