Democratic approximation of lexicographic preference models

https://doi.org/10.1016/j.artint.2010.11.012

Abstract

Lexicographic preference models (LPMs) are an intuitive representation that captures many real-world preferences exhibited by human decision makers. Previous algorithms for learning LPMs produce a single "best guess" LPM that is consistent with the observations. Our approach is more democratic: rather than committing to a single LPM, we approximate the target preference relation using the votes of a collection of consistent LPMs. We present two variations of this method, variable voting and model voting, and empirically show that these democratic algorithms outperform the existing methods. We give versions of both algorithms for the case where the preferred values of the attributes are known and for the case where they are unknown. We also introduce an intuitive yet powerful form of background knowledge for pruning the space of possible LPMs, show how it can be incorporated into variable and model voting, and demonstrate that doing so improves performance significantly, especially when the number of observations is small.
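To make the voting idea concrete, here is a minimal sketch in Python; it is not the authors' implementation. It assumes binary attributes whose preferred value is known (1 over 0), enumerates by brute force all attribute orders consistent with the observed pairwise preferences, and decides a new comparison by a simple unweighted majority vote of those models. The names `lpm_prefers`, `consistent_lpms`, and `model_vote` are hypothetical.

```python
from itertools import permutations

def lpm_prefers(order, a, b):
    # True if `a` is preferred to `b` under the lexicographic attribute
    # order `order`, assuming value 1 is the preferred value (assumption).
    for i in order:
        if a[i] != b[i]:
            return a[i] > b[i]
    return False  # a and b agree on every attribute

def consistent_lpms(n_attrs, observations):
    # Brute-force enumeration of all attribute orders that agree with
    # every observed preference (a, b), read as "a is preferred to b".
    return [order for order in permutations(range(n_attrs))
            if all(lpm_prefers(order, a, b) for a, b in observations)]

def model_vote(models, a, b):
    # Democratic prediction: each consistent LPM casts one vote.
    votes_for_a = sum(lpm_prefers(order, a, b) for order in models)
    return votes_for_a > len(models) / 2

# Three binary attributes; one observation pins attribute 0 first.
obs = [((1, 0, 0), (0, 1, 1))]
models = consistent_lpms(3, obs)                 # (0, 1, 2) and (0, 2, 1)
print(model_vote(models, (0, 1, 1), (0, 1, 0)))  # True: both models agree
```

Brute-force enumeration is exponential in the number of attributes, so this sketch only illustrates the "votes of consistent models" idea; it does not reflect how the paper's algorithms scale or weight the models.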

Keywords

Lexicographic models
Preference learning
Bayesian methods
