
A semismooth Newton based dual proximal point algorithm for maximum eigenvalue problem

Published in Computational Optimization and Applications

Abstract

The maximum eigenvalue problem is to minimize the maximum eigenvalue function over an affine subspace in a symmetric matrix space; it has many applications in areas such as combinatorial optimization, control theory, and structural design. Based on the classical analysis of the proximal point (Ppa) algorithm and the semismooth analysis of nonseparable spectral operators, we propose an efficient semismooth Newton based dual proximal point (Ssndppa) algorithm to solve the maximum eigenvalue problem, in which an inexact semismooth Newton (Ssn) algorithm is applied to solve the inner subproblems of the dual proximal point (d-Ppa) algorithm. Global convergence and local asymptotic superlinear convergence of the d-Ppa algorithm are established under very mild conditions, and fast superlinear or even quadratic convergence of the Ssn algorithm is obtained when the primal constraint nondegeneracy condition holds for the inner subproblem. The computational cost of the Ssn algorithm for solving the inner subproblem can be reduced by fully exploiting the low-rank or high-rank structure of the matrix involved. Numerical experiments on max-cut problems and randomly generated maximum eigenvalue optimization problems demonstrate that the Ssndppa algorithm substantially outperforms the Sdpnal+ solver and several state-of-the-art first-order algorithms.
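As a point of reference, the problem described above can be written in the following form; this is an illustrative formulation assumed from the standard literature, and the symbols \(\mathcal{A}\), \(b\), \(\sigma_k\) and \(\mathcal{T}\) are notational assumptions rather than the paper's own notation:

\[ \min_{X \in \mathcal{S}^n} \; \lambda_{\max}(X) \quad \text{subject to} \quad \mathcal{A}(X) = b, \]

where \(\mathcal{S}^n\) denotes the space of \(n \times n\) symmetric matrices, \(\lambda_{\max}(\cdot)\) is the largest-eigenvalue function, \(\mathcal{A} : \mathcal{S}^n \to \mathbb{R}^m\) is a linear map and \(b \in \mathbb{R}^m\), so that the feasible set \(\{X : \mathcal{A}(X) = b\}\) is an affine subspace. In a generic dual proximal point scheme, the dual iterate is updated by an inexact resolvent step of the form

\[ y^{k+1} \approx \bigl(I + \sigma_k \, \mathcal{T}\bigr)^{-1}\bigl(y^{k}\bigr), \]

where \(\mathcal{T}\) is the maximal monotone operator associated with the dual problem and \(\sigma_k > 0\) is the proximal parameter; as the abstract indicates, each such inner subproblem is solved inexactly by a semismooth Newton method.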


Data Availability

The datasets generated and/or analysed during the current study are available at https://sparse.tamu.edu/Gset.



Acknowledgements

The work of Yong-Jin Liu was supported in part by the National Natural Science Foundation of China (Grant Nos. 11871153 and 12271097) and the Natural Science Foundation of Fujian Province of China (Grant No. 2019J01644).

Author information

Corresponding author

Correspondence to Yong-Jin Liu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Yong-Jin Liu and Jing Yu have contributed equally to this work.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Liu, YJ., Yu, J. A semismooth Newton based dual proximal point algorithm for maximum eigenvalue problem. Comput Optim Appl 85, 547–582 (2023). https://doi.org/10.1007/s10589-023-00467-2

