Decision Support
Interactive evolutionary multi-objective optimization for quasi-concave preference functions

https://doi.org/10.1016/j.ejor.2010.02.027

Abstract

We present a new hybrid approach to interactive evolutionary multi-objective optimization that uses a partial preference order to act as the fitness function in a customized genetic algorithm. We periodically send solutions to the decision maker (DM) for her evaluation and use the resulting preference information to form preference cones consisting of inferior solutions. The cones allow us to implicitly rank solutions that the DM has not considered. This technique avoids assuming an exact form for the preference function, but does assume that the preference function is quasi-concave. This paper describes the genetic algorithm and demonstrates its performance on the multi-objective knapsack problem.

Introduction

The field of combinatorial optimization, with its variety of NP-hard problems, has turned to heuristics to provide nearly optimal solutions to previously intractable problems. Frequently, however, a DM is faced with a combinatorial optimization problem with several different objectives. These multi-objective combinatorial optimization (MOCO) problems are even more difficult to solve optimally since they involve both NP-hard problems and multiple objectives.

While it is sometimes possible to create a single objective problem by combining the objective functions into a single preference function, the way a DM combines conflicting objectives is often difficult to capture. In many cases, the DM cannot quantify how the objectives should be combined into a preference function that can then be optimized using single objective optimization techniques. This complication is often severe enough to make a priori approaches impractical. For a more complete discussion of the advantages and shortcomings of these approaches, the interested reader is referred to Dyer et al., 1992, Wallenius et al., 2008.

An alternative approach is to generate the set of Pareto optimal solutions (or approximate Pareto optimal solutions) and present them to the DM, who then selects a solution a posteriori. Unfortunately, this set is difficult to visualize for problems having three or more objectives and is very time-consuming to generate, since it may comprise thousands of solutions even in the case of linear constraints and objectives (Kondakci et al., 1996).

Interactive decision making addresses many of the challenges associated with a priori and a posteriori approaches. It combines the process of obtaining information from the DM with the process of generating solutions to the problem. Seminal papers in the area include Geoffrion et al., 1972, Zionts and Wallenius, 1976. Since these approaches generate solutions that the DM must evaluate in order to guide the progression of the algorithm, research in this area is concerned with both the generation and the representation of solutions. There are software packages that perform both activities. See Caballero et al., 2002, Poles et al., 2006 for a comprehensive list of software descriptions. For more information on generating solutions and representing them to a DM, the interested reader is referred to Miettinen (1999).

Evolutionary optimization and computing has emerged as a new field with strong ties to Multiple Criteria Decision Making/Multiattribute Utility Theory (Deb, 2001). The first evolutionary multi-objective optimization algorithm is due to Schaffer (1984). However, it was not until about 10 years later that three working evolutionary algorithms were suggested almost at the same time: MOGA by Fonseca and Fleming (1993), NSGA by Srinivas and Deb (1994), and NPGA by Horn et al. (1995). The main thrust in all these algorithms was to generate an approximation of the Pareto optimal frontier. Fonseca and Fleming (1998) demonstrate the need for preference articulation in cases where many objectives lead to a non-dominated set too large to sample effectively. A survey of pre-2000 methods that attempt to handle the user's preferences is provided by Coello Coello (2000). More recent attempts to incorporate the user's preferences into a multi-objective evolutionary framework include Cvetkovic and Parmee, 2002, Phelps and Köksalan, 2003, Branke and Deb, 2004, Deb et al., 2005, Hanne, 2005, Kamalian et al., 2004, Molina et al., 2009, Parmee et al., 2001. Cvetkovic and Parmee (2002) assign weights for the objectives and additionally require a minimum level for dominance. NSGA-II (Deb et al., 2002) elaborates on two methods, the guided dominance principle and biased crowding distance, to incorporate vague user preferences. The method in Deb et al. (2005) is based on the idea of using reference direction projections as part of the fitness function. Hanne (2005) discusses interactive decision support based on evolutionary principles. Parmee et al. (2001) and Kamalian et al. (2004) discuss interactive evolutionary systems for multi-objective design. Molina et al. (2009) develop a reference point-based method where they favor solutions that dominate a reference point or that are dominated by this reference point over all other solutions. Phelps and Köksalan (2003) develop and demonstrate an interactive genetic algorithm on multi-objective knapsack and minimum spanning tree problems. Finally, Köksalan and Phelps (2007) develop a multi-objective evolutionary algorithm to concentrate on a desired part of the efficient frontier using partial information on the preferences of the DM.

To the best of our knowledge, the approach of Phelps and Köksalan (2003) and our approach are the only ones that guarantee correct partial orders of the populations, provided that the DM's preferences are consistent with the assumed utility function forms. Phelps and Köksalan (2003) use a linear utility function and make corrections to the partial order whenever the DM's expressed preferences are not consistent with such a function. We assume a more general quasi-concave function, a form that is considered to represent human preferences well. To our knowledge, ours is the only evolutionary algorithm that incorporates the properties of an implicit quasi-concave utility function into the algorithm. Utilizing the theory developed for quasi-concave functions, we guarantee partial orders that are consistent with preferences derived from such functions.

Our research uses a similar interactive genetic algorithm framework to that of Phelps and Köksalan (2003), and we use a similar experimental framework to test our algorithm. They estimate and use a linear utility function to order the population and make corrections to this ordering when the DM's expressed preferences are in conflict with it. However, our technique is more general since it is designed to support any quasi-concave function of the objectives. Incorporating DM preferences in the metaheuristic via DM interaction guides the search to the most preferred region of the solution space. This helps avoid both the computational burden of generating the full efficient frontier and the problems inherent in attempting to quantify the DM's preference function. More specifically, we propose an evolutionary metaheuristic that evaluates (i.e., partially rank orders) solutions using the convex preference cones developed in Korhonen et al. (1984). These cones are generated from pairwise comparisons of solutions, making it possible to preference-order solutions that the DM has never directly evaluated and to provide a stronger ordering than that of previous dominance-based techniques. The preference order is used to evaluate the fitness of the population members. As in Phelps and Köksalan (2003), we assume that the individual objectives are known and that the user's preference function is unknown.
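To illustrate the basic two-point cone test in the spirit of Korhonen et al. (1984), the following is a minimal sketch, not the paper's implementation; the function name, tolerance handling, and the all-objectives-maximized convention are our own illustrative choices.

    import math

    def cone_dominated(z, z_pref, z_worse, tol=1e-9):
        """Two-point convex preference cone test (after Korhonen et al., 1984),
        assuming all objectives are maximized and the DM's value function is
        quasi-concave and nondecreasing.

        Given that the DM prefers z_pref to z_worse, return True when the
        candidate z lies below (componentwise) some point of the cone
            { z_worse + mu * (z_worse - z_pref) : mu >= 0 },
        in which case z can be ranked no better than z_worse without
        consulting the DM again.
        """
        lo, hi = 0.0, math.inf                # feasible interval for mu
        for zk, pk, wk in zip(z, z_pref, z_worse):
            d = wk - pk                       # cone direction component
            if abs(d) <= tol:                 # mu drops out of this constraint
                if zk > wk + tol:
                    return False
            elif d > 0:                       # z_k <= w_k + mu*d  =>  mu >= (z_k - w_k)/d
                lo = max(lo, (zk - wk) / d)
            else:                             # d < 0              =>  mu <= (z_k - w_k)/d
                hi = min(hi, (zk - wk) / d)
        return lo <= hi + tol

For instance, with z_pref = (4, 6) and z_worse = (5, 4), the candidate (6, 1) is cone-dominated (take mu = 1, giving the cone point (6, 2)) even though it is not Pareto-dominated by (5, 4); this is precisely how the cones rank solutions the DM never compared directly.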

The results of our method are evaluated in a manner similar to that of Phelps and Köksalan (2003). To assess the ability of our algorithm to correctly select the DM's most preferred solution, we compare our results to the best solution found by the genetic algorithm according to the DM's true preferences. Because large problem instances are difficult to solve to optimality, we also compare our result to the optimal value of the linear programming (LP) relaxation of our problem instances. We report the average value of these metrics over several runs of each tested configuration of the algorithm.

In the following sections, we detail the development of the evolutionary metaheuristic and present computational results for the multi-objective 0–1 knapsack problem (MOKP). Without loss of generality, we assume that each objective is to be maximized.
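For reference, a generic statement of the MOKP with a single capacity constraint is given below; the instance-specific coefficients are not part of this formulation, and multi-constraint variants simply add one weight constraint per knapsack dimension.

    \begin{aligned}
    \text{maximize}\quad & z_k(x) = \sum_{j=1}^{n} c_{kj}\, x_j, \qquad k = 1,\dots,p,\\
    \text{subject to}\quad & \sum_{j=1}^{n} w_j\, x_j \le W,\\
    & x_j \in \{0,1\}, \qquad j = 1,\dots,n,
    \end{aligned}

where c_{kj} is the value of item j under objective k, w_j is its weight, and W is the knapsack capacity.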


Steps of the evolutionary metaheuristic

While all genetic algorithms share certain common features (Michalewicz, 1996), they must be adapted to the specific problem at hand in order to provide good solutions efficiently (Phelps and Köksalan, 2003). In this case, the algorithm must generate the population of solutions, obtain preference information from the DM, and partially order the population members based on that preference information in order to determine which members of the population to select for breeding and which to replace.
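As a rough, hypothetical sketch of this loop (the population size, operators, sampling rule, and dm_period parameter below are illustrative placeholders, not the settings reported in the paper), the following reuses the cone_dominated helper sketched above and drives binary-tournament selection by a simple count of how many stored cones rank a member inferior:

    import random

    def interactive_ga(evaluate, ask_dm, n_bits, pop_size=40, generations=100,
                       dm_period=10, sample_size=4, seed=0):
        """Interactive GA skeleton: a cone-based partial order drives selection.

        evaluate(x) -> tuple of objective values (all maximized); any
        constraint handling (e.g., repair or penalties) is assumed to live
        inside evaluate.
        ask_dm(points) -> (most preferred point, least preferred point).
        """
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        cones = []                                  # stored (z_pref, z_worse) pairs

        def inferiority(x):
            # Fitness surrogate: number of stored cones that rank x inferior.
            z = evaluate(x)
            return sum(cone_dominated(z, zp, zw) for zp, zw in cones)

        for gen in range(generations):
            if gen % dm_period == 0:                # periodic DM interaction
                sample = [evaluate(x) for x in rng.sample(pop, sample_size)]
                cones.append(ask_dm(sample))
            scores = [inferiority(x) for x in pop]

            def tournament():
                i, j = rng.randrange(pop_size), rng.randrange(pop_size)
                return pop[i] if scores[i] <= scores[j] else pop[j]

            children = []
            for _ in range(pop_size):
                a, b = tournament(), tournament()
                cut = rng.randrange(1, n_bits)      # one-point crossover
                child = a[:cut] + b[cut:]
                k = rng.randrange(n_bits)           # single bit-flip mutation
                child[k] = 1 - child[k]
                children.append(child)
            pop = children                          # generational replacement

        return min(pop, key=inferiority)

The actual algorithm combines dominance information with the cone-based partial order and uses problem-specific operators; the skeleton only shows where the DM interaction and the cone-based fitness fit into the evolutionary cycle.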

Experimentation

Several design and implementation issues arose while developing this genetic algorithm. As with most genetic algorithms, we needed to decide on the population size, the number of generations to run, and several other similar parameters. We addressed this parameter tuning problem using an experimental design framework.

For all experiments, the DM was replaced with a ‘function robot’ that would evaluate the sample of solutions sent to the DM and return the best and the worst of the sample based on
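A 'function robot' of this kind can be sketched as follows; the particular value functions shown (a weighted linear function and a weighted min-type function, both quasi-concave and increasing) are illustrative stand-ins, not necessarily the forms used in the paper's experiments.

    def make_function_robot(value):
        """Simulated DM: ranks a sample of objective vectors by a hidden
        value function and reports the best and the worst member.

        `value` maps an objective vector to a scalar and is assumed to be
        quasi-concave and increasing, consistent with the paper's assumption
        about the DM's preference function.
        """
        def ask_dm(points):
            ranked = sorted(points, key=value)
            return ranked[-1], ranked[0]      # (most preferred, least preferred)
        return ask_dm

    # Illustrative quasi-concave value functions (assumptions, not the paper's):
    linear_dm = make_function_robot(lambda z: 0.6 * z[0] + 0.4 * z[1])
    minimum_dm = make_function_robot(lambda z: min(0.6 * z[0], 0.4 * z[1]))

Plugged into the interactive loop sketched earlier (e.g., interactive_ga(evaluate, linear_dm, n_bits=...)), such a robot closes the loop without a human in it, which is what allows repeated, controlled runs of each configuration.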

Conclusions and future work

We have developed and tested a new evolutionary approach to interactive multi-objective optimization by showing how convex preference cones can be used to sort solutions to multi-objective combinatorial optimization problems. In turn, this guided search restricts the efficient solutions generated to the region of the solution space most preferred by the DM, and it can reduce the number of efficient solutions that must be generated for the DM to evaluate. Applying this technique requires

References (31)

  • Ravindra Ahuja et al. A greedy genetic algorithm for the quadratic assignment problem. Computers and Operations Research (2000).
  • J. Molina et al. G-dominance: Reference point based dominance for multiobjective metaheuristics. European Journal of Operational Research (2009).
  • I.C. Parmee et al. Introducing prototype interactive evolutionary systems for ill-defined multi-objective design environments. Advances in Engineering Software (2001).
  • Branke, J., Deb, Kalyanmoy, 2004. Integrating user preferences into evolutionary multi-objective optimization. KanGal...
  • R. Caballero et al. Promoin: An interactive system for multiobjective programming. International Journal of Information Technology and Decision Making (2002).
  • Coello Coello, Carlos A., 2000. Handling preferences in evolutionary multiobjective optimization: A survey. In:...
  • Dragan Cvetkovic et al. Preferences and their application in evolutionary multiobjective optimization. IEEE Transactions on Evolutionary Computation (2002).
  • Kalyanmoy Deb. Multi-Objective Optimization Using Evolutionary Algorithms (2001).
  • Kalyanmoy Deb et al. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation (2002).
  • Deb, Kalyanmoy, Sundar, J., Uday, B.R.N., 2005. Reference point based multi-objective optimization using evolutionary...
  • James Dyer et al. Multiple criteria decision making, multiattribute utility theory: The next ten years. Management Science (1992).
  • Fonseca, C.M., Fleming, P.J., 1993. Genetic algorithms for multiobjective optimization: Formulation, discussion, and...
  • C.M. Fonseca et al. Multiobjective optimization and multiple constraint handling with evolutionary algorithms. Part II: Application example. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans (1998).
  • A.M. Geoffrion et al. An interactive approach for multicriterion optimization, with an application to the operation of an academic department. Management Science (1972).
  • T. Hanne. Interactive decision support based on multiobjective evolutionary algorithms. Operations Research Proceedings (GOR) (2005).