Decision Aiding
User acceptance of multi-criteria decision support systems: The impact of preference elicitation techniques

https://doi.org/10.1016/j.ejor.2004.05.031

Abstract

Previous research indicates that decision makers are often reluctant to use potentially beneficial multi-criteria decision support systems (MCDSS). Prior research has not examined the specific impact of preference elicitation techniques on user acceptance of MCDSS. The present research begins to fill this gap by examining the effect on users’ MCDSS evaluations of two commonly used preference elicitation techniques, absolute measurement and pairwise comparisons, while holding constant all other aspects of the MCDSS and decision making task. Experimental results (N = 153) indicate that users consider MCDSS with pairwise comparisons to be higher in decisional conflict, more effortful, less accurate, and overall less desirable to use than MCDSS with absolute measurements. Thus, any potential normative superiority of a preference elicitation technique must be balanced against its potentially adverse effects on user acceptance of the MCDSS within which it is employed. We present a research agenda for exploring the tradeoffs between objective validity and user acceptance in the design of decision analysis tools.

Introduction

Over several decades, significant technical advances have been made in the design of multi-criteria decision support systems (MCDSS) (see Siskos and Spyridakos (1999) for a comprehensive survey of the technical advances and Maxwell (2002) for a survey of commercial decision analysis software). There is conflicting evidence on how widely MCDSS are used in practice. Some argue that MCDSS are not readily accepted and used (Evans, 1984, Whyte and Latham, 1997, Limayem and DeSanctis, 2000), while others claim that users’ evaluations of MCDSS are generally quite favorable (Timmermans and Vlek, 1994). Despite these differences, observers concur that user evaluations vary from one specific MCDSS to another. Often mentioned is the observation that, given a choice, decision makers appear to prefer relatively unsophisticated MCDSS (Wallenius, 1975, Brockhoff, 1985, Buchanan and Daellenbach, 1987, Kottemann and Davis, 1991, Olson et al., 1995) and even prefer self-generated ad hoc solutions over those generated by MCDSS (Narasimhan and Vickery, 1988). These studies suggest that we do not have a complete understanding of what influences decision makers’ acceptance of MCDSS. A normatively superior MCDSS will only lead to improved decision making if the decision maker evaluates it favorably (on evaluative criteria linked to actual user adoption behavior) and is willing to use it. Dyer et al. (1992) call for researchers to bring behavioral and psychological insights to bear on the design of MCDSS. There has been very little investigation to date into how specific design features influence potential users’ evaluations of MCDSS. This significant gap in the current literature is the primary motivation for the present study.

The preference elicitation technique (e.g., absolute measurement, pairwise comparisons, ordinal judgments, rankings, matching, choice) employed within an MCDSS is a key element of that MCDSS. However, prior research has not examined the effect of the preference elicitation technique on users’ evaluations of MCDSS. Kottemann and Davis (1991) theorize that preference elicitation techniques embodied in MCDSS differ in the degree to which they require users to make explicit tradeoff judgments, which in turn cause a negative affective state in the decision maker called “decisional conflict”. Such decisional conflict is theorized to adversely influence users’ perceptions and preferences regarding the use of associated MCDSS. In this study, we will investigate the specific effects of the preference elicitation technique on decisional conflict and user acceptance criteria (i.e. perceived effort, perceived accuracy, and overall preference) for MCDSS.

To concretely illustrate our research question, consider two examples of leading commercial MCDSS software packages available today: Logical Decisions and Expert Choice. Expert Choice employs the analytic hierarchy process (AHP) (Saaty, 1980), while Logical Decisions supports AHP along with several other MCDA formalisms. Thus, it would be possible to use AHP and pairwise comparisons with either MCDSS. As one of its alternate modes, Logical Decisions supports a direct absolute measurement method: after defining the attributes and alternatives for a decision situation, users assign preference values to a matrix formed by the intersection of the attributes and the alternatives, and then specify the weights for each attribute. Expert Choice uses pairwise comparisons among alternatives to elicit attribute preferences, giving the user the option of horizontal, vertical, or pie-chart-shaped response scales and qualitative or quantitative response category anchors. Our examination of these two MCDSS reveals many striking differences in their user interfaces and functionality apart from the preference elicitation techniques employed. For example, Expert Choice gives the user feedback in the form of a consistency check and the option to perform sensitivity analysis on the results, whereas Logical Decisions is relatively unconstrained in that it allows the user to enter values falling outside of their own user-defined ranges. The two MCDSS require the user to navigate very different complex sets of menu options in order to accomplish a decision task. Thus, in addition to differences in the preference elicitation techniques employed, there are many differences between Logical Decisions and Expert Choice that could be responsible for any observed differences in users’ evaluations of the two MCDSS. Our research question concerns the specific effect of preference elicitation techniques on user evaluations of MCDSS, holding constant all of these various other kinds of differences from one MCDSS to another.
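To make concrete what the pairwise-comparison technique demands of the user, the judgments an AHP-based tool such as Expert Choice collects can be arranged in a reciprocal comparison matrix from which priority weights and a consistency measure are derived. The following sketch uses the row geometric-mean approximation rather than Saaty's full principal-eigenvector computation, and the example matrix is invented purely for illustration:

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise-
    comparison matrix using the row geometric-mean method (a common
    stand-in for Saaty's principal-eigenvector method)."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(pairwise, weights):
    """Saaty's consistency ratio CR = CI / RI, where
    CI = (lambda_max - n) / (n - 1) and RI is a tabulated random index."""
    n = len(pairwise)
    # Estimate lambda_max from A.w = lambda.w, averaged over the rows.
    lam = sum(
        sum(a * wj for a, wj in zip(row, weights)) / wi
        for row, wi in zip(pairwise, weights)
    ) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}.get(n, 1.45)
    return ci / ri

# Hypothetical judgments over three alternatives: the first is judged
# twice as preferable as the second and four times as preferable as
# the third; the second is twice as preferable as the third.
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w = ahp_weights(A)            # ~ [0.571, 0.286, 0.143]
cr = consistency_ratio(A, w)  # ~ 0: this matrix is perfectly consistent
```

In practice a CR above roughly 0.10 signals inconsistent judgments; this is the kind of feedback Expert Choice's consistency check surfaces to the user.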

Our review of prior literature below will demonstrate that the effect of preference elicitation techniques on users’ evaluations of MCDSS has not been previously investigated. In particular, we will review two major groups of prior research:

  • (1) Research comparing different MCDSS on various criteria spanning user evaluations and normative properties. A given MCDSS consists of a bundle of design features, any of which may influence users’ evaluations of that system: the preference elicitation technique, the appearance and format of the user interface, feedback and feedforward during the decision process, the format in which the user receives the system recommendations, the availability of sensitivity analysis, external information regarding the capabilities of the system, and others. Differences observed between two MCDSS might be due to any of several features that differ between them, but these studies were not designed to isolate the effects of specific features. Therefore, the effects of the specific preference elicitation technique are confounded with all of the other features that differ between the MCDSS being compared.

  • (2) Research investigating the specific effects of preference elicitation techniques. Extensive prior research has addressed the impact of alternative preference elicitation techniques on the normative properties of parameters elicited from decision makers and the resulting effects on objective decision outcomes such as consistency, reliability, and preference reversals. This research generally concludes that preference elicitation techniques requiring decision makers to make direct tradeoff judgments between attributes (e.g., pairwise comparisons) are normatively superior. However, prior research has overlooked the specific effects of preference elicitation techniques on user evaluations of the MCDSS that employ them. This is a serious omission, since even normatively superior features will fail to deliver realized improvements in decision making if users are not willing to use and rely on the advice of an MCDSS.


Prior research comparing different MCDSS

There are several relevant studies of user perceptions of MCDSS in the literature. Olson et al. (1995) review many of these studies and present a new study that compares four MCDSS: DECAID is based on multi-attribute utility theory (MAUT) and involves direct absolute measurements; Logical Decisions, also based on MAUT, uses indirect numerical tradeoffs to elicit weights and absolute numerical measurements to elicit scores; Expert Choice, based on the AHP, involves pairwise comparisons; and

Theory and research hypotheses

This paper seeks to experimentally test the hypothesis of Kottemann and Davis (1991) that a preference elicitation technique requiring the user to make explicit tradeoff judgments will result in the user experiencing a higher level of decisional conflict. Following Kottemann and Davis (1991), we use the term decisional conflict to refer to the negative affective state experienced by a decision maker as a result of making explicit tradeoff judgments among alternatives. There

Participants

The subjects for the experiment were 155 undergraduate students of business administration in four separate sessions. Two students were left out of the sample because they did not complete the procedure and their responses were not captured by the software, leaving a sample size of 153. The subjects were undergraduate juniors or seniors previously unexposed to multi-criteria methods. They were given a brief introduction to the two MCDM methods used in the study and shown an example of how these

Results

Since different scales were used for the items within each of the four constructs, all items relating to the four constructs (decisional conflict, perceived effort, perceived accuracy, and preference) were converted to a 0–100 scale. The items for decisional conflict, perceived effort, and perceived accuracy measured users’ self-reports of these constructs for the two versions of the system, A and B, on absolute scales. The items for preference measured relative preference for these versions, with
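The conversion formula itself is not reproduced in the snippet; a straightforward linear (min–max) rescaling consistent with the description would map each item from its native response scale onto the common 0–100 scale:

```python
def to_0_100(response, scale_min, scale_max):
    """Linearly map a raw item response from its native scale
    [scale_min, scale_max] onto a common 0-100 scale, so that items
    with different response formats can be combined into one
    construct score."""
    return 100.0 * (response - scale_min) / (scale_max - scale_min)

# Example: on a 7-point item, a response of 5 maps to
# 100 * (5 - 1) / (7 - 1), i.e. about 66.7 on the 0-100 scale.
```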

Conclusion

The results of this study have intriguing implications for the designers of MCDSS. Janis and Mann (1977) point out that a moderate degree of decisional stress is necessary to motivate a decision maker to fully evaluate alternatives and work out the best possible solution. Amason (1996) experimentally shows that cognitive conflict can improve strategic decision making quality, and thus this form of conflict is functional. Davis (1989), Kottemann et al. (1994), and others show that subjective

Acknowledgement

We thank three anonymous referees for their helpful comments and suggestions that greatly improved the paper.

References (50)

  • A. Amason

    Distinguishing the effects of functional and dysfunctional conflict on strategic decision making: Resolving a paradox for top management teams

    Academy of Management Journal

    (1996)
  • V. Belton

    A comparison of the analytic hierarchy process and a simple multi-attribute value function

    European Journal of Operational Research

    (1986)
  • J.R. Bettman et al.

    A componential analysis of cognitive effort in choice

    Organizational Behavior and Human Decision Processes

    (1990)
  • J.R. Bettman et al.

    Correlation, conflict, and choice

    Journal of Experimental Psychology

    (1993)
  • K. Brockhoff

    Experimental test of MCDM algorithms in a modular approach

    European Journal of Operational Research

    (1985)
  • J.T. Buchanan et al.

    A comparative evaluation of interactive solution methods for multiple objective decision models

    European Journal of Operational Research

    (1987)
  • D.T. Campbell et al.

    Convergent and discriminant validation by the multitrait–multimethod matrix

    Psychological Bulletin

    (1959)
  • S. Chatterjee et al.

    Conflict and loss aversion in multiattribute choice: The effects of trade-off size and reference dependence on decision difficulty

    Organizational Behavior and Human Decision Processes

    (1996)
  • P.C. Chu et al.

    The joint effects of effort and quality on decision strategy choice with computerized decision aids

    Decision Sciences

    (2000)
  • P.C. Chu et al.

    Cross-cultural differences in choice behavior & use of decision aids: A comparison of Japan & the United States

    Organizational Behavior & Human Decision Processes

    (1999)
  • J.L. Corner et al.

    Capturing decision maker preference: Experimental comparison of decision analysis and MCDM techniques

    European Journal of Operational Research

    (1997)
  • F.D. Davis

    Perceived usefulness, perceived ease of use, and user acceptance of information technology

    MIS Quarterly

    (1989)
  • F.D. Davis et al.

    User acceptance of computer technology: A comparison of two theoretical models

    Management Science

    (1989)
  • P. Delquie

    Optimal conflict in preference assessment

    Management Science

    (2003)
  • R.F. De Vellis

    Scale Development: Theory and Applications

    (2003)
  • J.S. Dyer et al.

    Multiple criteria decision making, multiattribute utility theory: The next ten years

    Management Science

    (1992)
  • W. Edwards

    How to use multiattribute utility measurement for social decision making

    IEEE Transactions on Systems, Man, and Cybernetics

    (1977)
  • G.W. Evans

    An overview of techniques for solving multiobjective mathematical programs

    Management Science

    (1984)
  • G.W. Fischer et al.

    Strategy compatibility, scale compatibility, and the prominence effect

    Journal of Experimental Psychology: Human Perception & Performance

    (1993)
  • P.L. Gardner

    Scales and statistics

    Review of Educational Research

    (1975)
  • J.F. Hair et al.

    Multivariate Data Analysis

    (1998)
  • C.K. Hsee et al.

    Preference reversals between joint and separate evaluations of options: A review and theoretical analysis

    Psychological Bulletin

    (1999)
  • I.L. Janis et al.

    Decision Making: A Psychological Analysis of Conflict, Choice, & Commitment

    (1977)
  • E.J. Johnson et al.

    Effort and accuracy in choice

    Management Science

    (1985)
  • J.E. Kottemann et al.

    Decisional conflict and user acceptance of multi-criteria decision-making aids

    Decision Sciences

    (1991)