Decision Support
Interactive algorithms for a broad underlying family of preference functions

https://doi.org/10.1016/j.ejor.2017.07.028

Highlights

  • Introduces a distance function to represent the preferences of the decision maker.

  • First study that reduces the solution space based on distance-based functions.

  • The associated theory for reducing the solution space is developed.

  • Interactive algorithms to find the best point of the decision maker are developed.

  • The algorithms quickly converge to the most preferred point of the decision maker.

Abstract

In multi-criteria decision making approaches it is typical to consider an underlying preference function that is assumed to represent the decision maker’s preferences. In this paper we introduce a broad family of preference functions that can represent a wide variety of preference structures. We develop the necessary theory and interactive algorithms for both the general family of preference functions and for its special cases. The algorithms are guaranteed to find the most preferred solution (point) of the decision maker under the assumed conditions. The convergence of the algorithms is achieved by progressively reducing the solution space based on the preference information obtained from the decision maker and the properties of the assumed underlying preference functions. We first demonstrate the algorithms on a simple bi-criteria problem with a given set of available points. We also test the performances of the algorithms on three-criteria knapsack problems and show that they work well.

Introduction

Multi-objective problems are characterized by the presence of multiple, generally conflicting, objectives and can be formulated as

$$\text{``Minimize''}\quad f(x) = (f_1(x), \ldots, f_p(x))^T, \quad \text{subject to } x \in X, \qquad (1)$$

where $x$ is the decision vector, $X$ is the feasible decision space, $f_j$ is the $j$th objective function, and $p$ is the total number of objectives. The quotation marks in (1) indicate that the minimization of a vector is not a well-defined mathematical operation.

Let $Z$ be the image of the feasible decision space, $X$, in the objective space. An objective vector $z = (z_1, \ldots, z_p)^T \in Z$ is said to be dominated if and only if there exists $\hat{z} \in Z$ such that $\hat{z}_j \le z_j$ for $j = 1, \ldots, p$ and $\hat{z}_j < z_j$ for at least one $j$. If there does not exist such a $\hat{z}$, then $z$ is said to be nondominated. A dominated point, $z$, is said to be strictly dominated by $\hat{z}$ if and only if $\hat{z}_j < z_j$ for $j = 1, \ldots, p$. The decision vector corresponding to a dominated (nondominated) point is called an inefficient (efficient) solution.
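As a concrete illustration of these definitions (not part of the paper), the following minimal Python sketch classifies the points of a finite set Z as dominated or nondominated under minimization; the function names and the toy data are assumptions for illustration only.

```python
import numpy as np

def dominates(z_hat, z):
    """True if z_hat dominates z (minimization): z_hat is no worse in every
    objective and strictly better in at least one."""
    z_hat, z = np.asarray(z_hat, dtype=float), np.asarray(z, dtype=float)
    return bool(np.all(z_hat <= z) and np.any(z_hat < z))

def nondominated_indices(points):
    """Indices of the nondominated points in a finite set Z."""
    return [i for i, z in enumerate(points)
            if not any(dominates(z_hat, z)
                       for j, z_hat in enumerate(points) if j != i)]

# Small check: the second point is dominated by the first.
Z = [(2.0, 3.0), (2.0, 4.0), (1.0, 5.0)]
print(nondominated_indices(Z))  # -> [0, 2]
```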

In solving multi-objective problems we need to obtain preference information from a decision maker (DM) in order to differentiate between nondominated solutions. There are different types of approaches that obtain preference information at different phases of the decision making process. In evaluating discrete solutions that are defined by multiple objectives, there are two main approaches in terms of gathering preference information. The first type estimates a preference function that supposedly represents the DM’s preferences at the start and converts the problem into a single-objective problem (see for example Jacquet-Lagreze & Siskos, 1982; Keeney & Raiffa, 1976; Roy, 1971; Saaty, 1980). This approach may be sensitive to the degree to which the estimated preference function represents the DM’s preferences. Interactive approaches, on the other hand, gather preference information progressively and converge towards preferred solutions iteratively. They aim at reaching the preferred solutions of the DM while keeping the amount of required preference information low (see for example Branke, Corrente, Greco, Slowinski, & Zielniewicz, 2016; Korhonen, Wallenius, & Zionts, 1984; Köksalan, Karwan, & Zionts, 1984).

Typically, the DM is presented a small set of solutions and is asked to provide preference information between them. Using the available preference information, new solutions that may be of more interest to the DM are searched for. The approach alternates between preference gathering and searching phases until it converges to highly preferred solutions. Evaluating the solutions progressively helps the DM to discover the available solutions and facilitates learning about his/her own preferences.

In this paper, we deal with the so-called choice problem where there are many available solutions each defined by multiple objectives. The task is to find a highly preferred solution of the DM.

Interactive approaches developed for the choice problem typically assume that the DM has a certain preference structure that can be represented by an underlying preference function. They do not attempt to estimate the preference function but use its properties in converging towards the preferred solutions. Different studies assume different types of underlying preference functions (see for example Zionts, 1981 (linear); Greco et al., 2008 (additive); Korhonen et al., 1984 (quasiconcave); Bozkurt et al., 2010 (Tchebycheff); Köksalan & Sagala, 1995a (general monotone)). In these approaches, it is typical to reduce the objective/solution space by progressively eliminating those parts that are known not to contain the most preferred solution of the DM. For example, Korhonen et al. (1984) use convex cones to eliminate inferior solutions under the assumption of a nondecreasing quasiconcave preference function.

There is a tradeoff between the required preference information and the generality of the assumed preference function. In the search for the most preferred solution, the more restrictive the form of the preference function is, the less preference information we expect to require from the DM. That is, if the DM’s preferences are known to be consistent with a linear preference function, then one can use any approach that allows for this preference structure. However, the approach developed specifically for a linear preference function is the most efficient one, as it directly exploits the linear structure. The approaches developed for quasiconcave and general monotone preference functions also cover a linear preference function as a special case, but they utilize weaker properties in order to stay applicable to their more general preference-function structures. These observations are empirically supported by Köksalan (1984) and Köksalan and Sagala (1995a). Köksalan and Sagala (1995b) develop methods to estimate the form of the DM’s preference function and suggest using the approach that corresponds to the most restrictive suitable form among linear, quasiconcave, quasiconvex, and general monotone.

It has been argued that data involving real people indicate that maximizing quasiconcave preference functions represents human behavior well (see for example Silberberg & Suen, 2001, pp. 260–261; Nicholson & Snyder, 2008, p. 50), and such functions have been used extensively in the literature when objectives are of maximization type (see for example Korhonen et al., 1984; Ulu & Köksalan, 2014). When the objectives are of minimization type, the same arguments hold for minimizing quasiconvex preference functions.

In this paper, we develop algorithms that are compatible with a broad family of preference functions. The algorithms are general in the sense that they can cover the whole family, and they can be made as restrictive as needed as information about the parameters of the preference function is uncovered. More specifically, we assume that underlying weighted Lα preference functions represent the DM’s preferences. This is a special family of nondecreasing preference functions that can capture a wide variety of preference structures and can be considered an approximation of the quasiconvex family of preference functions. We use the DM’s preferences in conjunction with the properties of these functions to eliminate the parts of the solution space that cannot contain the most preferred solution. We develop the theory to characterize such inferior regions. This theory, depending on the structure of the DM’s preferences, allows us to eliminate large portions of the solution space and converge to the most preferred solution quickly.
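To make the assumed preference model concrete, the sketch below evaluates one common form of a weighted Lα function: the weighted Lα distance of a point to a reference point z_ref (e.g., an ideal point). The reference-point anchoring and the helper names are illustrative assumptions; the paper's exact formulation is not reproduced here.

```python
import numpy as np

def l_alpha_w(z, z_ref, w, alpha):
    """Weighted L_alpha value of objective vector z, measured here as a
    distance to a reference point z_ref:
        ( sum_j w_j * |z_j - z_ref_j| ** alpha ) ** (1 / alpha).
    Smaller values are taken to be preferred (minimization objectives)."""
    z, z_ref, w = (np.asarray(v, dtype=float) for v in (z, z_ref, w))
    return float(np.sum(w * np.abs(z - z_ref) ** alpha) ** (1.0 / alpha))

def most_preferred(points, z_ref, w, alpha):
    """Index of the point with the smallest L_alpha^w value in a finite set."""
    return int(np.argmin([l_alpha_w(z, z_ref, w, alpha) for z in points]))
```

Setting α = 1 recovers a weighted linear (L1) function and letting α grow large approaches a weighted Tchebycheff function, which is why the family can mimic quite different preference structures.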

We develop two interactive algorithms that guarantee finding the most preferred solution of a DM whose preferences are consistent with, or can be approximated by, a weighted Lα preference function. In both algorithms, we assume that the weights are not known. The difference between the algorithms is based on whether or not we have prior knowledge of the value of α. If we do, the corresponding algorithm uses the properties of this specific preference function. For the case where we do not know the exact value of α, we develop an algorithm that allows for all possible α values at the start and progressively restricts this range based on the preference information obtained from the interactions with the DM. To the best of our knowledge, this is the first study that uses the family of weighted Lα functions to represent the preferences of the DM and exploits their properties to converge to the preferred solutions quickly.

In Section 2 we review the background information and develop the necessary theory for reducing the solution space under weighted Lα preference functions. We develop the algorithms in Section 3 and demonstrate them on a numerical example in Section 4. We test the performances of the algorithms on various three-criteria knapsack problems in Section 5. We make concluding remarks and discuss future research directions in Section 6.

Section snippets

Some theory

In this section, we first provide background information on search space reduction (SSR) under quasiconvex preference functions and then develop the necessary theory for more powerful SSR under weighted Lα preference functions. In order to facilitate easy reading, we provide the details of all of our proofs in the appendix.

Korhonen et al. (1984) reduce the objective space using convex cones based on the preferences of the DM under the assumption that he/she has a nondecreasing quasiconcave

Algorithms

In this section we develop two algorithms. We assume that the DM’s preferences are consistent with an $L_\alpha^w$ function. The problem is to find the most preferred point of the DM among a finite set of available points, $z_i \in Z$. The first algorithm (α-Known) addresses the case where we know the true value of α at the outset. We relax this assumption in the second algorithm (α-Unknown) and address the case where the α value corresponding to the DM’s preference function is unknown to us. In both

A numerical example

We demonstrate both the α-Known and α-Unknown algorithms on a simple bi-criteria example. We simulate the responses of the DM assuming that his/her preferences are consistent with a weighted Euclidean distance function ($L_2^w$) with $w_1 = 0.45$ and $w_2 = 0.55$. Assume that the available points are as given in Table 1. We also provide the corresponding preference function values in Table 1. We label the set of steps from 1 to 5 in each algorithm as an iteration.
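For readers who want to reproduce this kind of simulation, the hedged sketch below mimics a DM whose pairwise responses follow a weighted Euclidean function with w1 = 0.45 and w2 = 0.55 as stated above; the reference point and the candidate points are hypothetical stand-ins, since Table 1 is not reproduced in this excerpt.

```python
import numpy as np

# Simulated DM: preferences consistent with a weighted Euclidean (alpha = 2)
# distance with w1 = 0.45, w2 = 0.55, as in the example. The reference point
# and the candidate points below are hypothetical, not those of Table 1.
w, alpha = np.array([0.45, 0.55]), 2
z_ref = np.array([0.0, 0.0])

def value(z):
    """Preference function value of point z (smaller is better)."""
    z = np.asarray(z, dtype=float)
    return float(np.sum(w * np.abs(z - z_ref) ** alpha) ** (1.0 / alpha))

def dm_prefers(z_a, z_b):
    """Simulated pairwise response: True if the DM prefers z_a over z_b."""
    return value(z_a) < value(z_b)

candidates = [(3.0, 6.0), (5.0, 4.0), (7.0, 2.0)]   # hypothetical points
print(dm_prefers(candidates[0], candidates[1]))      # False: (5.0, 4.0) scores lower
print(min(candidates, key=value))                    # simulated most preferred point
```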

We present the progress of the α-Known algorithm in

Computational experiments

We test the performances of the algorithms on 50- and 100-item, three-criteria knapsack problem instances solved by Lokman et al. (2016). We take the nondominated points of each instance they solve and treat those points as a discrete set of nondominated points. To represent the underlying preference functions, we use five different α values: $\alpha = 2, 3, 4, 8$, and fifteen different weight vectors: $w^1 = (0.90, 0.05, 0.05)^T$, $w^2 = (0.05, 0.90, 0.05)^T$, $w^3 = (0.05, 0.05, 0.90)^T$, $w^4 = (0.33, 0.33, 0.33)^T$, $w^5 = (0.80, 0.10, 0.10)^T$, $w$

Conclusions

In this paper, we develop two interactive algorithms that guarantee finding the most preferred point of a DM whose preferences are consistent with an $L_\alpha^w$ function. The algorithms utilize the properties of the $L_\alpha^w$ functions and the available preferences of the DM to reduce the search space and converge to the most preferred point. The two algorithms differ based on whether or not we have prior knowledge of the value of α. The α-Known algorithm uses the properties of the corresponding preference

References (23)

  • M.M. Köksalan et al., Interactive approaches for discrete alternative multiple criteria decision making with monotone utility functions, Management Science (1995).