An accelerated active-set algorithm for a quadratic semidefinite program with general constraints

Computational Optimization and Applications

Abstract

In this paper, we are concerned with efficient algorithms for solving the least squares semidefinite programming problem with many equality and inequality constraints. Our proposed method is built upon the dual formulation and is a type of active-set approach. In particular, by exploiting the nonnegativity constraints in the dual form, the method first uses information from the Barzilai–Borwein step to estimate the active/inactive sets, and, within an adaptive framework, it then accelerates convergence by dynamically switching between the L-BFGS iteration and the semi-smooth Newton iteration. We establish global convergence under mild conditions and, furthermore, local quadratic convergence under an additional nondegeneracy condition. Various synthetic and real-world examples are tested, and preliminary but promising numerical results are reported.
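To make the dual active-set idea concrete, the following Python sketch shows one way a Barzilai–Borwein (BB) step can be used to estimate the active and inactive components of a dual variable constrained to be nonnegative. This is a minimal sketch under our own naming (estimate_active_set, z, grad, s_prev, w_prev, tol are all hypothetical) and does not reproduce the authors' exact estimation rule.

```python
import numpy as np

def estimate_active_set(z, grad, s_prev, w_prev, tol=1e-6):
    """Illustrative active-set estimate for a dual variable z >= 0.

    A BB step length is formed from the previous iterate and gradient
    differences (s_prev, w_prev), a projected gradient step is taken,
    and components driven to the bound are flagged as active.
    """
    bb_step = float(s_prev @ w_prev) / float(w_prev @ w_prev)  # BB2-type step length
    z_trial = np.maximum(z - bb_step * grad, 0.0)              # projected BB step
    active = z_trial <= tol                                    # components at the bound
    return active, ~active
```

On the estimated inactive components one would then take an unconstrained step (L-BFGS or semi-smooth Newton) and re-estimate the sets as the iteration proceeds; the dynamic switching between the two iterations is the adaptive part described in the abstract.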

Notes

  1. The initial inverse Hessian approximation in L-BFGS is generally set to \(\zeta ^k I\) with \(\zeta ^k=\frac{(s^{k-1})^Tw^{k-1}}{(w^{k-1})^Tw^{k-1}}\); a minimal sketch of the two-loop recursion with this initialization is given after these notes.

  2. The codes for the ISNM and P-BFGS approaches are available online at http://www.math.nus.edu.sg/~matsundf/ and https://ctk.math.ncsu.edu/matlab_darts.html, respectively.

  3. We remark that \(\varepsilon\) is set to \(10^{-8}\) only in this subsubsection because it makes the quadratic convergence rate easier to observe in the numerical results. In the remaining numerical experiments, \(\varepsilon\) is still set to \(10^{-6}\).

  4. One may observe that the convergence in the final stage in Tables 2 and 3 is not truly quadratic. This is because the generalized Newton system is solved by CG, so only an approximate solution of suitable accuracy is computed; an illustrative inexact-CG sketch follows these notes.
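For illustration of Note 1, here is a minimal Python sketch of the textbook L-BFGS two-loop recursion seeded with the \(\zeta ^k I\) initialization. The function and variable names are ours, and the code sketches the standard recursion rather than the authors' implementation.

```python
import numpy as np

def lbfgs_direction(grad, s_list, w_list):
    """Two-loop recursion returning -H_k * grad with H_0 = zeta * I.

    s_list and w_list hold the stored iterate and gradient differences
    (s^j = x^{j+1} - x^j, w^j = g^{j+1} - g^j), ordered oldest first.
    """
    q = np.array(grad, dtype=float)
    alphas, rhos = [], []
    for s, w in zip(reversed(s_list), reversed(w_list)):    # newest to oldest
        rho = 1.0 / float(w @ s)
        alpha = rho * float(s @ q)
        q = q - alpha * w
        rhos.append(rho)
        alphas.append(alpha)
    s_last, w_last = s_list[-1], w_list[-1]
    zeta = float(s_last @ w_last) / float(w_last @ w_last)  # initial scaling zeta^k
    r = zeta * q                                            # apply H_0 = zeta * I
    for (s, w), rho, alpha in zip(zip(s_list, w_list), reversed(rhos), reversed(alphas)):
        beta = rho * float(w @ r)                           # oldest to newest
        r = r + (alpha - beta) * s
    return -r                                               # quasi-Newton direction
```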
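Regarding Note 4, the sketch below makes the remark concrete. Writing the generalized Newton system generically as \(Vd=-g\) with \(V\) symmetric positive definite, CG solves it only up to a relative residual, which is why the observed local rate falls slightly short of quadratic; the routine name and the tolerance below are illustrative, not the settings used in the paper.

```python
import numpy as np

def inexact_newton_step(V, g, rel_tol=1e-2, max_iter=200):
    """Approximately solve V d = -g by conjugate gradients (CG).

    The solve stops once the relative residual drops below rel_tol, so the
    Newton step is only approximate; tightening rel_tol pushes the local
    rate back toward quadratic at the cost of more CG iterations per step.
    """
    b = -np.asarray(g, dtype=float)
    d = np.zeros_like(b)
    r = b.copy()                          # residual of V d = b at d = 0
    p = r.copy()
    rs = float(r @ r)
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        if np.sqrt(rs) <= rel_tol * b_norm:
            break
        Vp = V @ p
        alpha = rs / float(p @ Vp)
        d = d + alpha * p
        r = r - alpha * Vp
        rs_new = float(r @ r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d
```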

References

  1. Barzilai, J., Borwein, J.M.: Two point step size gradient methods. IMA J. Numer. Anal. 8, 141–148 (1988)

    Article  MathSciNet  MATH  Google Scholar 

  2. Boyd, S., Xiao, L.: Least squares covariance matrix adjustment. SIAM J. Matrix Anal. Appl. 27, 532–546 (2005)

    Article  MathSciNet  MATH  Google Scholar 

  3. Chen, X., Qi, H., Tseng, P.: Analysis of nonsmooth symmetric-matrix-valued functions with applications to semidefinite complementarity problems. SIAM J. Optim. 13, 960–985 (2003)

    Article  MathSciNet  MATH  Google Scholar 

  4. Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)

    MATH  Google Scholar 

  5. Dai, Y.H.: Alternate step gradient method. Optimization 52, 395–415 (2003)

    Article  MathSciNet  MATH  Google Scholar 

  6. Facchinei, F.: Minimization of SC\(^1\) functions and the Maratos effect. Oper. Res. Lett. 17(3), 131–137 (1995)

    MathSciNet  MATH  Google Scholar 

  7. Facchinei, F., Fischer, A., Kanzow, C.: On the accurate identification of active constraints. SIAM J. Optim. 9(1), 14–32 (1998)

    Article  MathSciNet  MATH  Google Scholar 

  8. Gabay, D.: Application of the method of multipliers to variational inequalities. In: Fortin, M., Glowinski, R. (eds.) Augmented Lagrangian Methods: Application to the Numerical Solution of Boundary-Value Problems, pp. 299–331. North-Holland, Amsterdam (1983)

    Chapter  Google Scholar 

  9. Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximations. Comput. Math. Appl. 2, 17–40 (1976)

    Article  MATH  Google Scholar 

  10. Gao, Y., Sun, D.F.: Calibrating least squares semidefinite programming with equality and inequality constraints. SIAM J. Matrix Anal. Appl. 31, 1432–1457 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  11. Han, R.Q., Xie, W.J., Xiong, X.: Market correlation structure changes around the great crash: a random matrix theory analysis of the Chinese stock market. Fluct. Noise Lett. 16(02), 1750018 (2017)

    Article  Google Scholar 

  12. He, B.S., Xu, M.H., Yuan, X.M.: Solving large-scale least squares covariance matrix problems by alternating direction methods. SIAM J. Matrix Anal. Appl. 32, 136–152 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  13. Kupiec, P.H.: Stress testing in a value-at-risk framework. J. Deriv. 6, 7–24 (1998)

    Article  Google Scholar 

  14. Kelley, C.T.: Iterative Methods for Optimization, pp. 102–104. SIAM, Philadelphia (1999)

    Book  MATH  Google Scholar 

  15. Li, Q.N., Li, D.H.: A projected semi-smooth Newton method for problems of calibrating least squares covariance matrix. Oper. Res. Lett. 39, 103–108 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  16. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large-scale optimization. Math. Program. 45, 503–528 (1989)

    Article  MathSciNet  MATH  Google Scholar 

  17. Malick, J.: A dual approach to semidefinite least squares problems. SIAM J. Matrix Anal. Appl. 26, 272–284 (2004)

    Article  MathSciNet  MATH  Google Scholar 

  18. Nobi, A., Maeng, S.E., Ha, G.G., Lee, J.W.: Random matrix theory and cross-correlations in global financial indices and local stock market indices. J. Korean Phys. Soc. 62(4), 569–574 (2013)

    Article  Google Scholar 

  19. Nobi, A., Maeng, S.E., Ha, G.G., Lee, J.W.: Effects of global financial crisis on network structure in a local stock market. Phys. A 407, 135–143 (2014)

    Article  Google Scholar 

  20. Nocedal, J., Wright, S.J.: Numerical Optimization, 2nd edn. Springer, Berlin (2006)

    MATH  Google Scholar 

  21. Qi, L.Q.: Convergence analysis of some algorithms for solving nonsmooth equations. Math. Oper. Res. 18, 227–244 (1993)

    Article  MathSciNet  MATH  Google Scholar 

  22. Qi, L.Q.: Superlinearly convergent approximate Newton methods for LC\(^1\) optimization problems. Math. Program. 64(1–3), 277–294 (1994)

    MathSciNet  MATH  Google Scholar 

  23. Qi, L.Q., Sun, J.: A nonsmooth version of Newton’s method. Math. Program. 58, 353–367 (1993)

    Article  MathSciNet  MATH  Google Scholar 

  24. Qi, H.D., Sun, D.F.: A quadratically convergent Newton method for computing the nearest correlation matrix. SIAM J. Matrix Anal. Appl. 28, 360–385 (2006)

    Article  MathSciNet  MATH  Google Scholar 

  25. Rockafellar, R.T.: Conjugate Duality and Optimization. SIAM, Philadelphia (1974)

    Book  MATH  Google Scholar 

  26. Schwertman, N.C., Allen, D.M.: Smoothing an indefinite variance–covariance matrix. J. Stat. Comput. Simul. 9, 183–194 (1979)

    Article  Google Scholar 

  27. Shen, C.G., Fan, C.X., Wang, Y.L., Xue, W.J.: Limited memory BFGS algorithm for the matrix approximation problem in Frobenius norm. Comput. Appl. Math. 39, 43 (2020)

    Article  MathSciNet  MATH  Google Scholar 

  28. So, M.K.P., Wang, J., Asai, M.: Stress testing correlation matrices for risk management. North Am. J. Econ. Finance 26, 310–322 (2013)

    Article  Google Scholar 

  29. Sornette, D.: Critical market crashes. Phys. Rep. 378(1), 1–98 (2003)

    Article  MathSciNet  MATH  Google Scholar 

  30. Sornette, D.: Why Stock Markets Crash: Critical Events in Complex Financial Systems. Princeton University Press, Princeton (2017)

    Book  MATH  Google Scholar 

  31. Sturm, J.F.: Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 11, 625–653 (1999)

    Article  MathSciNet  MATH  Google Scholar 

  32. Sun, D.F., Sun, J.: Semi-smooth matrix valued functions. Math. Oper. Res. 27, 150–169 (2002)

    Article  MathSciNet  MATH  Google Scholar 

  33. Sun, Y.F., Vandenberghe, L.: Decomposition methods for sparse matrix nearness problems. SIAM J. Matrix Anal. Appl. 36, 1691–1717 (2015)

    Article  MathSciNet  MATH  Google Scholar 

  34. Tütüncü, R.H., Toh, K.C., Todd, M.J.: Solving semidefinite-quadratic-linear programs using SDPT3. Math. Program. 95, 189–217 (2003)

    Article  MathSciNet  MATH  Google Scholar 

  35. Ye, C.H., Yuan, X.M.: A descent method for structured monotone variational inequalities. Optim. Methods Softw. 22, 329–338 (2007)

    Article  MathSciNet  MATH  Google Scholar 

  36. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim. 14, 1043–1056 (2004)

    Article  MathSciNet  MATH  Google Scholar 

  37. Zhou, B., Gao, L., Dai, Y.H.: Gradient methods with adaptive step-sizes. Comput. Optim. Appl. 35, 69–86 (2006)

    Article  MathSciNet  MATH  Google Scholar 

Download references

Acknowledgements

The authors would like to thank the Associate Editor and anonymous referees for their helpful suggestions.

Author information

Corresponding author

Correspondence to Chungen Shen.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Wenjuan Xue: The work of this author was supported in part by the National Natural Science Foundation of China (NSFC-11601318). Lei-Hong Zhang: The work of this author was supported in part by the National Natural Science Foundation of China (NSFC-11671246, NSFC-12071332), the National Key R&D Program of China (No. 2018YFB0204404), and the Double Innovation Program of Jiangsu Province (2018).

About this article

Cite this article

Shen, C., Wang, Y., Xue, W. et al. An accelerated active-set algorithm for a quadratic semidefinite program with general constraints. Comput Optim Appl 78, 1–42 (2021). https://doi.org/10.1007/s10589-020-00228-5
