
Comparison of Analogy-Based Methods for Predicting Preferences

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11940)

Abstract

Given a set of preferences between pairs of items described in terms of nominal or numerical attribute values, the problem considered is to predict the preference between the items of a new pair. The paper proposes and compares two approaches based on analogical proportions, which are statements of the form “a is to b as c is to d”. The first approach uses triples of pairs of items with known preferences that form analogical proportions together with the new pair. These proportions express, attribute by attribute, that the change of values between the items of the first two pairs is the same as between the last two pairs. This provides a basis for predicting the preference associated with the fourth pair, while making sure that no contradictory trade-offs are created. Moreover, we also consider the option of taking one of the pairs in the triples as a k-nearest neighbor of the new pair. The second approach exploits pairs of compared items one by one: to predict the preference between two items, one looks for another pair of items with a known preference such that, attribute by attribute, the change between the elements of the first pair is the same as between the elements of the second pair. As discussed in the paper, both approaches agree with the postulates underlying weighted averages and more general multiple-criteria aggregation models. The paper proposes new algorithms for implementing these methods. The reported experiments, on both real and generated datasets, suggest the effectiveness of the approaches. We also compare with predictions given by weighted sums compatible with the data, obtained by linear programming.
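To make the notion concrete, here is a minimal sketch (illustrative only, not the paper’s algorithm) of checking whether four attribute vectors form an analogical proportion in the numerical case, i.e. whether the change from a to b equals the change from c to d on every attribute:

```python
def in_analogical_proportion(a, b, c, d, tol=1e-9):
    """True if, attribute by attribute, a - b equals c - d (numerical case)."""
    return all(abs((ai - bi) - (ci - di)) <= tol
               for ai, bi, ci, di in zip(a, b, c, d))

# The change from a to b is (0.5, -0.2), and so is the change from c to d:
a, b = (1.0, 0.2), (0.5, 0.4)
c, d = (0.8, 0.6), (0.3, 0.8)
print(in_analogical_proportion(a, b, c, d))  # -> True
```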



Acknowledgements

This work was partially supported by ANR-11-LABX-0040-CIMI (Centre Inter. de Math. et d’Informatique) within the program ANR-11-IDEX-0002-02, project ISIPA.

Author information

Correspondence to Myriam Bounhas.

Appendices

A Tversky’s Additive Difference Model

Tversky’s additive difference model functions used in the experiments are given below. Let \(d^1, d^2\) be a pair of alternatives to be compared. We denote by \(\eta _i\) the difference between \(d^1\) and \(d^2\) on criterion i, i.e. \(\eta _i = d^1_i - d^2_i\). For the TV dataset in which 3 features are involved, we used the following piecewise linear functions:

$$\begin{aligned} \varPhi _1(\eta _1)&= \begin{cases} \operatorname{sgn}(\eta _1) \cdot 0.453 \cdot 0.143 \cdot \eta _1 &{} \text {if } |\eta _1| \in [0, 0.25],\\ \operatorname{sgn}(\eta _1) \cdot 0.453 \cdot [-0.168 + 0.815 \cdot \eta _1] &{} \text {if } |\eta _1| \in [0.25, 0.5],\\ \operatorname{sgn}(\eta _1) \cdot 0.453 \cdot [0.230 + 0.018 \cdot \eta _1] &{} \text {if } |\eta _1| \in [0.5, 0.75],\\ \operatorname{sgn}(\eta _1) \cdot 0.453 \cdot [-2.024 + 3.024 \cdot \eta _1] &{} \text {if } |\eta _1| \in [0.75, 1], \end{cases}\\ \varPhi _2(\eta _2)&= \begin{cases} \operatorname{sgn}(\eta _2) \cdot 0.053 \cdot 2.648 \cdot \eta _2 &{} \text {if } |\eta _2| \in [0, 0.25],\\ \operatorname{sgn}(\eta _2) \cdot 0.053 \cdot [0.371 + 1.163 \cdot \eta _2] &{} \text {if } |\eta _2| \in [0.25, 0.5],\\ \operatorname{sgn}(\eta _2) \cdot 0.053 \cdot [0.926 + 0.054 \cdot \eta _2] &{} \text {if } |\eta _2| \in [0.5, 0.75],\\ \operatorname{sgn}(\eta _2) \cdot 0.053 \cdot [0.866 + 0.134 \cdot \eta _2] &{} \text {if } |\eta _2| \in [0.75, 1], \end{cases}\\ \varPhi _3(\eta _3)&= \begin{cases} \operatorname{sgn}(\eta _3) \cdot 0.494 \cdot 0.289 \cdot \eta _3 &{} \text {if } |\eta _3| \in [0, 0.25],\\ \operatorname{sgn}(\eta _3) \cdot 0.494 \cdot [-0.197 + 1.076 \cdot \eta _3] &{} \text {if } |\eta _3| \in [0.25, 0.5],\\ \operatorname{sgn}(\eta _3) \cdot 0.494 \cdot [0.150 + 0.383 \cdot \eta _3] &{} \text {if } |\eta _3| \in [0.5, 0.75],\\ \operatorname{sgn}(\eta _3) \cdot 0.494 \cdot [-1.252 + 2.252 \cdot \eta _3] &{} \text {if } |\eta _3| \in [0.75, 1]. \end{cases} \end{aligned}$$

For the TV dataset in which 5 features are involved, we used the following piecewise linear functions:

$$\begin{aligned} \varPhi _1(\eta _1)&= \begin{cases} \operatorname{sgn}(\eta _1) \cdot 0.294 \cdot 2.510 \cdot \eta _1 &{} \text {if } |\eta _1| \in [0, 0.25],\\ \operatorname{sgn}(\eta _1) \cdot 0.294 \cdot [0.562 + 0.263 \cdot \eta _1] &{} \text {if } |\eta _1| \in [0.25, 0.5],\\ \operatorname{sgn}(\eta _1) \cdot 0.294 \cdot [0.645 + 0.096 \cdot \eta _1] &{} \text {if } |\eta _1| \in [0.5, 0.75],\\ \operatorname{sgn}(\eta _1) \cdot 0.294 \cdot [-0.130 + 1.130 \cdot \eta _1] &{} \text {if } |\eta _1| \in [0.75, 1], \end{cases}\\ \varPhi _2(\eta _2)&= \begin{cases} \operatorname{sgn}(\eta _2) \cdot 0.151 \cdot 0.125 \cdot \eta _2 &{} \text {if } |\eta _2| \in [0, 0.25],\\ \operatorname{sgn}(\eta _2) \cdot 0.151 \cdot [0.025 + 0.023 \cdot \eta _2] &{} \text {if } |\eta _2| \in [0.25, 0.5],\\ \operatorname{sgn}(\eta _2) \cdot 0.151 \cdot [-0.545 + 1.164 \cdot \eta _2] &{} \text {if } |\eta _2| \in [0.5, 0.75],\\ \operatorname{sgn}(\eta _2) \cdot 0.151 \cdot [-1.689 + 2.689 \cdot \eta _2] &{} \text {if } |\eta _2| \in [0.75, 1], \end{cases}\\ \varPhi _3(\eta _3)&= \begin{cases} \operatorname{sgn}(\eta _3) \cdot 0.039 \cdot 2.388 \cdot \eta _3 &{} \text {if } |\eta _3| \in [0, 0.25],\\ \operatorname{sgn}(\eta _3) \cdot 0.039 \cdot [0.582 + 0.057 \cdot \eta _3] &{} \text {if } |\eta _3| \in [0.25, 0.5],\\ \operatorname{sgn}(\eta _3) \cdot 0.039 \cdot [-0.046 + 1.314 \cdot \eta _3] &{} \text {if } |\eta _3| \in [0.5, 0.75],\\ \operatorname{sgn}(\eta _3) \cdot 0.039 \cdot [0.759 + 0.241 \cdot \eta _3] &{} \text {if } |\eta _3| \in [0.75, 1], \end{cases}\\ \varPhi _4(\eta _4)&= \begin{cases} \operatorname{sgn}(\eta _4) \cdot 0.425 \cdot 0.014 \cdot \eta _4 &{} \text {if } |\eta _4| \in [0, 0.25],\\ \operatorname{sgn}(\eta _4) \cdot 0.425 \cdot [-0.110 + 0.455 \cdot \eta _4] &{} \text {if } |\eta _4| \in [0.25, 0.5],\\ \operatorname{sgn}(\eta _4) \cdot 0.425 \cdot [-0.341 + 0.917 \cdot \eta _4] &{} \text {if } |\eta _4| \in [0.5, 0.75],\\ \operatorname{sgn}(\eta _4) \cdot 0.425 \cdot [-1.613 + 2.613 \cdot \eta _4] &{} \text {if } |\eta _4| \in [0.75, 1], \end{cases}\\ \varPhi _5(\eta _5)&= \begin{cases} \operatorname{sgn}(\eta _5) \cdot 0.091 \cdot 3.307 \cdot \eta _5 &{} \text {if } |\eta _5| \in [0, 0.25],\\ \operatorname{sgn}(\eta _5) \cdot 0.091 \cdot [0.697 + 0.519 \cdot \eta _5] &{} \text {if } |\eta _5| \in [0.25, 0.5],\\ \operatorname{sgn}(\eta _5) \cdot 0.091 \cdot [0.880 + 0.153 \cdot \eta _5] &{} \text {if } |\eta _5| \in [0.5, 0.75],\\ \operatorname{sgn}(\eta _5) \cdot 0.091 \cdot [0.979 + 0.021 \cdot \eta _5] &{} \text {if } |\eta _5| \in [0.75, 1]. \end{cases} \end{aligned}$$
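For illustration, the piecewise functions of the 3-feature model can be evaluated as follows. This is a sketch, not the authors’ code; we read the variable inside each affine piece as \(|\eta _i|\), which makes each \(\varPhi _i\) an odd function, as Tversky’s model requires, and makes adjacent pieces join continuously (e.g. \(0.143 \cdot 0.25 = -0.168 + 0.815 \cdot 0.25\)):

```python
import bisect

# (weight, [(intercept, slope) per piece]); the four pieces cover |eta| in
# [0, .25], [.25, .5], [.5, .75], [.75, 1] -- coefficients from Appendix A.
PHI_3F = [
    (0.453, [(0.0, 0.143), (-0.168, 0.815), (0.230, 0.018), (-2.024, 3.024)]),
    (0.053, [(0.0, 2.648), (0.371, 1.163), (0.926, 0.054), (0.866, 0.134)]),
    (0.494, [(0.0, 0.289), (-0.197, 1.076), (0.150, 0.383), (-1.252, 2.252)]),
]

def phi(i, eta):
    """Evaluate Phi_i(eta) of the 3-feature model, for eta in [-1, 1]."""
    weight, pieces = PHI_3F[i]
    x = abs(eta)
    k = min(bisect.bisect_left([0.25, 0.5, 0.75], x), 3)  # which piece
    b, m = pieces[k]
    sign = (eta > 0) - (eta < 0)
    return sign * weight * (b + m * x)

def prefers(d1, d2):
    """True if d1 is preferred to d2 under the additive difference model."""
    return sum(phi(i, a - b) for i, (a, b) in enumerate(zip(d1, d2))) > 0
```

With this reading, \(\varPhi _1(1) + \varPhi _2(1) + \varPhi _3(1) = 0.453 + 0.053 + 0.494 = 1\), i.e. the scale factors act as normalised weights.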

B Linear Program Used for Computing a Weighted Sum

We compared the performance of the algorithms presented in this paper with the results obtained with a linear program inferring the parameters of a weighted sum that fits the learning set as well as possible. The linear program is given below:

$$\begin{aligned} \begin{array}{rlr} \min \sum _{a \in E} &amp; \delta _a\\ \sum _{i = 1}^n w_i \cdot (a^1_i - a^2_i) + \delta _{a} &amp; \ge 0 &amp; \forall a \in E: a^1 \succcurlyeq a^2\\ \sum _{i = 1}^n w_i \cdot (a^1_i - a^2_i) - \delta _{a} &amp; \le \epsilon &amp; \forall a \in E: a^1 \prec a^2\\ w_i &amp; \in [0,1] &amp; i = 1, \ldots , n\\ \delta _{a} &amp; \in [0, \infty [ \end{array} \end{aligned}$$

with:

  • n: number of features,

  • E: learning set composed of pairs (\(a^1, a^2\)) evaluated on n features, together with a preference relation for each pair (\(a^1 \succcurlyeq a^2\) or \(a^1 \prec a^2\)),

  • \(w_i\): weight associated with feature i,

  • \(\epsilon \): a small positive value.
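As a sketch (not the authors’ implementation), this program can be set up with `scipy.optimize.linprog`. One addition of ours, not stated in the appendix: we impose the normalisation \(\sum _i w_i = 1\), since otherwise \(w = 0\) with zero slack trivially satisfies the program as printed.

```python
import numpy as np
from scipy.optimize import linprog

def fit_weighted_sum(pairs, eps=1e-3):
    """pairs: list of (a1, a2, pref), pref True iff a1 is preferred to a2.
    Returns a weight vector w fitted by the linear program of Appendix B."""
    n = len(pairs[0][0])                           # number of features
    m = len(pairs)                                 # number of compared pairs
    c = np.concatenate([np.zeros(n), np.ones(m)])  # minimise the total slack
    A_ub, b_ub = [], []
    for k, (a1, a2, pref) in enumerate(pairs):
        delta = np.asarray(a1, float) - np.asarray(a2, float)
        slack = np.zeros(m)
        slack[k] = 1.0
        if pref:   # sum_i w_i*(a1_i - a2_i) + delta_a >= 0
            A_ub.append(np.concatenate([-delta, -slack]))
            b_ub.append(0.0)
        else:      # sum_i w_i*(a1_i - a2_i) - delta_a <= eps
            A_ub.append(np.concatenate([delta, -slack]))
            b_ub.append(eps)
    # Normalisation sum_i w_i = 1 (our assumption, see lead-in above).
    A_eq = [np.concatenate([np.ones(n), np.zeros(m)])]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.array(A_eq), b_eq=[1.0],
                  bounds=[(0.0, 1.0)] * n + [(0.0, None)] * m,
                  method="highs")
    return res.x[:n]
```

On a toy learning set in which the first feature drives the preference, any zero-slack optimum must satisfy \(w_1 \ge w_2\), so the recovered weights reflect the data.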


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Bounhas, M., Pirlot, M., Prade, H., Sobrie, O. (2019). Comparison of Analogy-Based Methods for Predicting Preferences. In: Ben Amor, N., Quost, B., Theobald, M. (eds) Scalable Uncertainty Management. SUM 2019. Lecture Notes in Computer Science, vol 11940. Springer, Cham. https://doi.org/10.1007/978-3-030-35514-2_25


  • DOI: https://doi.org/10.1007/978-3-030-35514-2_25


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-35513-5

  • Online ISBN: 978-3-030-35514-2

