
Pareto-Weighted-Sum-Tuning: Learning-to-Rank for Pareto Optimization Problems

  • Conference paper
  • First Online:
Machine Learning, Optimization, and Data Science (LOD 2020)

Abstract

The weighted-sum method is a commonly used technique in multi-objective optimization for representing the different criteria considered in a decision-making and optimization problem. Weights are assigned to the criteria according to their degree of importance. However, even if decision-makers have an intuitive sense of how important each criterion is, explicitly quantifying and hand-tuning these weights can be difficult. To address this problem, we propose the Pareto-Weighted-Sum-Tuning algorithm as an automated and systematic way of trading off between different criteria in the weight-tuning process. Pareto-Weighted-Sum-Tuning is a configurable online-learning algorithm that uses sequential discrete choices made by a decision-maker over a sequence of decisions, eliminating the need to score items or weights. We prove that our online-learning approach is computationally less expensive than batch learning, where all the data is available in advance. Our experiments show that Pareto-Weighted-Sum-Tuning achieves low relative error across different configurations.
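To make the setting concrete, the sketch below illustrates the general idea described in the abstract; it is not the paper's actual algorithm. It shows a weighted-sum scalarization of several criteria, with the weights adjusted online from a decision-maker's pairwise choices via a simple perceptron-style ranking update. The function names (weighted_sum, update_weights) and the specific update rule are illustrative assumptions, not taken from the paper.

    # Illustrative sketch only (not the authors' implementation): tuning
    # weighted-sum criterion weights from a decision-maker's pairwise choices.
    import numpy as np

    def weighted_sum(criteria, weights):
        # Scalarize a vector of criterion values with a weight vector.
        return float(np.dot(weights, criteria))

    def update_weights(weights, preferred, rejected, lr=0.1):
        # One online step: if the chosen option does not already score higher,
        # nudge the weights toward it (perceptron-style pairwise-ranking update).
        if weighted_sum(preferred, weights) <= weighted_sum(rejected, weights):
            weights = weights + lr * (preferred - rejected)
            weights = np.clip(weights, 0.0, None)   # keep weights non-negative
            total = weights.sum()
            if total > 0:
                weights = weights / total           # renormalize to sum to 1
        return weights

    # Toy example: three criteria; a simulated decision-maker repeatedly picks
    # between two candidate decisions according to hidden preference weights.
    rng = np.random.default_rng(0)
    hidden_w = np.array([0.6, 0.3, 0.1])
    w = np.ones(3) / 3                              # uniform starting guess
    for _ in range(200):
        a, b = rng.random(3), rng.random(3)
        preferred, rejected = (a, b) if hidden_w @ a > hidden_w @ b else (b, a)
        w = update_weights(w, preferred, rejected)
    print(w)                                        # estimate drifts toward hidden_w

The point of the sketch is only that pairwise choices, rather than explicit numeric scores, are enough to steer the weights; the paper's configurable learning-to-rank formulation and its complexity analysis are given in the full text.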



Author information


Corresponding authors

Correspondence to Harry Wang or Brian T. Denton.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, H., Denton, B.T. (2020). Pareto-Weighted-Sum-Tuning: Learning-to-Rank for Pareto Optimization Problems. In: Nicosia, G., et al. Machine Learning, Optimization, and Data Science. LOD 2020. Lecture Notes in Computer Science, vol 12566. Springer, Cham. https://doi.org/10.1007/978-3-030-64580-9_39

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-64580-9_39

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-64579-3

  • Online ISBN: 978-3-030-64580-9

  • eBook Packages: Computer Science, Computer Science (R0)
