
Multiobjective BFGS method for optimization on Riemannian manifolds

Published in Computational Optimization and Applications

Abstract

This paper introduces a Riemannian BFGS method for multiobjective optimization problems whose objective functions are strongly retraction-convex. The method is a natural extension of its Euclidean counterpart and produces a sequence of iterates that converges to a Pareto optimal point from any starting point. The main component of the globalization strategy is a generalized Wolfe line search. Numerical experiments demonstrate the effectiveness and superiority of the proposed algorithm.
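The full text is paywalled, so the sketch below is not the paper's algorithm. It only illustrates the classical common-descent subproblem that multiobjective descent methods of this kind build on, in the simpler Euclidean steepest-descent setting with two objectives (the Riemannian BFGS method would replace Euclidean gradients with Riemannian gradients, the identity metric with a quasi-Newton approximation, and straight-line steps with retractions). The helper name `common_descent_direction` is hypothetical.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Illustrative sketch (not the paper's method): steepest common-descent
    direction for two objectives with gradients g1, g2.

    Solves min over lam in [0,1] of ||lam*g1 + (1-lam)*g2||^2 in closed form
    and returns d = -(lam*g1 + (1-lam)*g2). At a Pareto critical point the
    minimum-norm combination is zero, so d is (near) zero.
    """
    v = g1 - g2
    denom = v @ v
    # Unconstrained minimizer of ||g2 + lam*v||^2, clipped to [0, 1].
    lam = 0.0 if denom == 0.0 else float(np.clip(-(g2 @ v) / denom, 0.0, 1.0))
    return -(lam * g1 + (1.0 - lam) * g2)

# Example: gradients of two objectives at the current point.
g1 = np.array([1.0, 0.0])
g2 = np.array([0.0, 1.0])
d = common_descent_direction(g1, g2)
# d is a descent direction for both objectives: d @ g1 < 0 and d @ g2 < 0.
```

With more than two objectives the subproblem becomes a small quadratic program over the simplex, and quasi-Newton variants replace the squared norm of d with a quadratic form built from BFGS-type Hessian approximations.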


Algorithm 1
Fig. 1


Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.


Acknowledgements

We thank the editor and the anonymous referees for valuable feedback that improved the quality of this paper.

Author information


Corresponding author

Correspondence to Masoud Hajarian.

Ethics declarations

Conflict of interest

The authors have no conflict of interest to declare.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Najafi, S., Hajarian, M. Multiobjective BFGS method for optimization on Riemannian manifolds. Comput Optim Appl 87, 337–354 (2024). https://doi.org/10.1007/s10589-023-00522-y

