Abstract
In this paper, combined gradient methods are designed to solve multiobjective optimization problems. Exploiting the special structure of the problem, the methods use only the gradient of each objective function, combining the gradients through combining parameters to obtain a search direction; the Hessian matrix of each objective function is thus avoided. Under the assumption that the gradients of the objective functions are linearly independent, we prove that the methods always produce a subsequence converging to a local Pareto point of the problem, and we analyze their worst-case iteration complexity. Numerical results are reported to demonstrate the effectiveness of the algorithm.
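The abstract only sketches the idea of combining per-objective gradients into a single search direction; the paper's specific rule for choosing the combining parameters is not stated here. As a hypothetical illustration only, the sketch below uses the classical min-norm convex combination of gradients familiar from multiobjective steepest descent (cf. Fliege and Svaiter) on an assumed biobjective test problem; the function names, the fixed step size, and the test problem are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def min_norm_direction(g1, g2):
    # Min-norm convex combination for m = 2 objectives (closed form):
    # minimize ||lam*g1 + (1-lam)*g2||^2 over lam in [0, 1].
    diff = g1 - g2
    denom = diff @ diff
    lam = 0.5 if denom == 0 else np.clip(-(diff @ g2) / denom, 0.0, 1.0)
    v = lam * g1 + (1.0 - lam) * g2   # combined gradient
    return -v, v                      # common descent direction, combined gradient

def combined_gradient_descent(x0, grads, step=0.1, tol=1e-8, max_iter=200):
    # Gradient-only iteration: no Hessian of any objective is ever formed.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g1, g2 = (g(x) for g in grads)
        d, v = min_norm_direction(g1, g2)
        if np.linalg.norm(v) < tol:   # Pareto critical: no common descent direction
            break
        x = x + step * d
    return x

# Assumed biobjective test problem: f1 = ||x-a||^2, f2 = ||x-b||^2;
# the Pareto set is the segment between a and b.
a, b = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
grads = (lambda x: 2.0 * (x - a), lambda x: 2.0 * (x - b))
x_star = combined_gradient_descent([0.0, 2.0], grads)
```

Starting from (0, 2), the combined direction points straight toward the Pareto segment, so the iterates approach a Pareto-critical point where the min-norm combined gradient vanishes.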
Acknowledgements
This work has been partially supported by National Natural Science Foundation (Grant number: 11371253), Hainan Natural Science Foundation (Grant number: 120MS029) and The Science Foundation Grant of Provincial Education Department of Hunan (Grant number: 18A351).
Cite this article
Wang, P., Zhu, D. Combined gradient methods for multiobjective optimization. J. Appl. Math. Comput. 68, 2717–2741 (2022). https://doi.org/10.1007/s12190-021-01636-4