
Robust least square semidefinite programming with applications

  • Published in: Computational Optimization and Applications

Abstract

In this paper, we consider a least square semidefinite programming problem under ellipsoidal data uncertainty. We show that the robustification of this uncertain problem can be reformulated as a semidefinite linear programming problem with an additional second-order cone constraint. We then provide an explicit quantitative sensitivity analysis on how the solution under the robustification depends on the size/shape of the ellipsoidal data uncertainty set. Next, we prove that, under suitable constraint qualifications, the reformulation has zero duality gap with its dual problem, even when the primal problem itself is infeasible. The dual problem is equivalent to minimizing a smooth objective function over the Cartesian product of second-order cones and the Euclidean space, which is easy to project onto. Thus, we propose a simple variant of the spectral projected gradient method (Birgin et al. in SIAM J. Optim. 10:1196–1211, 2000) to solve the dual problem. While it is well known that any accumulation point of the sequence generated by the algorithm is a dual optimal solution, we show in addition that the dual objective value along the generated sequence converges to a finite value if and only if the primal problem is feasible, again under suitable constraint qualifications. This latter fact leads to a simple certificate for primal infeasibility in situations when the primal feasible set lies in a known compact set. As an application, we consider robust correlation stress testing where data uncertainty arises due to untimely recording of portfolio holdings. In our computational experiments on this particular application, our algorithm performs reasonably well on medium-sized problems for real data when finding the optimal solution (if it exists) or identifying primal infeasibility, and usually outperforms the standard interior-point solver SDPT3 in terms of CPU time.
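The dual approach described above relies on the projection onto a Cartesian product of second-order cones being cheap to compute. As a hedged illustration (this is not the paper's implementation; the toy objective, the variable names, and the monotone step rule below are our own simplifications), the following sketch combines the standard closed-form second-order-cone projection with a bare-bones spectral projected gradient loop using a Barzilai–Borwein step size:

```python
import numpy as np

def proj_soc(z):
    """Euclidean projection of z = (x, t) onto the second-order cone
    {(x, t) : ||x||_2 <= t}, via the standard closed-form formula."""
    x, t = z[:-1], z[-1]
    nx = np.linalg.norm(x)
    if nx <= t:            # already inside the cone
        return z.copy()
    if nx <= -t:           # inside the polar cone: project to the origin
        return np.zeros_like(z)
    # otherwise: scale onto the cone's boundary
    c = (nx + t) / 2.0
    return np.concatenate([c * x / nx, [c]])

def spg(grad, proj, x0, alpha0=1.0, tol=1e-8, max_iter=1000):
    """Bare-bones (monotone) spectral projected gradient:
    x+ = proj(x - alpha * grad(x)) with a Barzilai-Borwein step size."""
    x = proj(x0)
    alpha = alpha0
    for _ in range(max_iter):
        g = grad(x)
        x_new = proj(x - alpha * g)
        s = x_new - x
        if np.linalg.norm(s) < tol:
            return x_new
        y = grad(x_new) - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else alpha0  # BB1 step
        x = x_new
    return x

# Toy use: minimize ||z - b||^2 / 2 over one second-order cone;
# the minimizer is simply the projection of b onto the cone.
b = np.array([3.0, 4.0, 1.0])   # ||(3, 4)|| = 5 > t = 1, so b lies outside
z_star = spg(lambda z: z - b, proj_soc, np.zeros(3))
```

In the paper's setting the projection would be applied blockwise: the closed-form formula to each second-order cone factor, and the identity map on the unconstrained Euclidean component. The method of Birgin et al. additionally employs a nonmonotone line search, which this sketch omits for brevity.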


Notes

  1. We remark that many popular methods, such as the Newton-type method described in [17], accelerated proximal gradient methods (see, for example, [25–28, 37]) and the alternating direction method of multipliers (see, for example, [10, 12, 14, 15, 18, 19, 41]), could also be suitably adapted to solve (DSDP) when (RSDP) is feasible. However, it is not immediately clear to us how primal infeasibility can be readily certified in these algorithms.

  2. We have chosen SDPT3 since it is an off-the-shelf SDP solver that implements a second-order method and returns solutions with high accuracy; this latter point is important for benchmarking purposes and for understanding the behavior of a new model.

References

  1. Ben-Tal, A., Nemirovski, A.: Robust solutions of linear programming problems contaminated with uncertain data. Math. Program. 88, 411–424 (2000)

  2. Ben-Tal, A., Nemirovski, A.: Selected topics in robust convex optimization. Math. Program. 112, 125–158 (2008)

  3. Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton Series in Applied Mathematics. Princeton University Press, Princeton (2009)

  4. Berkowitz, J.: A coherent framework for stress testing. J. Risk 2(2), 5–15 (1999)

  5. Bertsimas, D., Brown, D.B.: Constructing uncertainty sets for robust linear optimization. Oper. Res. 57, 1483–1495 (2009)

  6. Bertsimas, D., Brown, D.B., Caramanis, C.: Theory and applications of robust optimization. SIAM Rev. 53, 464–501 (2011)

  7. Birge, J.R., Louveaux, F.: Introduction to Stochastic Programming. Springer, Berlin (1997)

  8. Birgin, E.G., Martínez, J.M., Raydan, M.: Nonmonotone spectral projected gradient methods on convex sets. SIAM J. Optim. 10, 1196–1211 (2000)

  9. Chou, C.-C., Ng, K.-F., Pang, J.-S.: Minimizing and stationary sequences of constrained optimization problems. SIAM J. Control Optim. 36, 1908–1936 (1998)

  10. Eckstein, J., Bertsekas, D.P.: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)

  11. El Ghaoui, L., Lebret, H.: Robust solutions to least-squares problems with uncertain data. SIAM J. Matrix Anal. Appl. 18, 1035–1064 (1997)

  12. Fortin, M., Glowinski, R.: On decomposition-coordination methods using an augmented Lagrangian. In: Fortin, M., Glowinski, R. (eds.) Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems. North-Holland, Amsterdam (1983)

  13. Fukushima, M., Luo, Z.-Q., Tseng, P.: Smoothing functions for second-order-cone complementarity problems. SIAM J. Optim. 12, 436–460 (2001)

  14. Gabay, D.: Applications of the method of multipliers to variational inequalities. In: Fortin, M., Glowinski, R. (eds.) Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems. North-Holland, Amsterdam (1983)

  15. Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximations. Comput. Math. Appl. 2, 17–40 (1976)

  16. Gafni, E.M., Bertsekas, D.P.: Two metric projection methods for constrained optimization. SIAM J. Control Optim. 22, 936–964 (1984)

  17. Gao, Y., Sun, D.: Calibrating least squares semidefinite programming with equality and inequality constraints. SIAM J. Matrix Anal. Appl. 31, 1432–1457 (2009)

  18. Glowinski, R., Marroco, A.: Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. Revue Française d'Automatique, Informatique et Recherche Opérationnelle 9(R-2), 41–76 (1975)

  19. He, B., Liao, L., Han, D., Yang, H.: A new inexact alternating directions method for monotone variational inequalities. Math. Program. 92, 103–118 (2002)

  20. Hu, H.: Geometric condition measures and smoothness condition measures for closed convex sets and linear regularity of infinitely many closed convex sets. J. Optim. Theory Appl. 126, 287–308 (2005)

  21. Kim, J., Finger, C.C.: A stress test to incorporate correlation breakdown. J. Risk 2(3), 5–19 (2000)

  22. Kupiec, P.H.: Stress testing in a value at risk framework. J. Deriv. 6, 7–24 (1998)

  23. Li, C., Ng, K.F., Pong, T.K.: The SECQ, linear regularity, and the strong CHIP for an infinite system of closed convex sets in normed linear spaces. SIAM J. Optim. 18, 643–665 (2007)

  24. Lu, Z., Zhang, Y.: An augmented Lagrangian approach for sparse principal component analysis. Math. Program. 135, 149–193 (2012)

  25. Nesterov, Y.: A method for solving a convex programming problem with convergence rate O(1/k²). Sov. Math. Dokl. 27(2), 372–376 (1983)

  26. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic, Amsterdam (2003)

  27. Nesterov, Y.: Excessive gap technique in nonsmooth convex minimization. SIAM J. Optim. 16, 235–249 (2005)

  28. Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. 103, 127–152 (2005)

  29. Qi, H., Sun, D.: Correlation stress testing for value-at-risk: an unconstrained convex optimization approach. Comput. Optim. Appl. 45, 427–462 (2010)

  30. Rebonato, R., Jäckel, P.: The most general methodology for creating a valid correlation matrix for risk management and option pricing purposes. J. Risk 2(2), 17–27 (1999)

  31. Robinson, S.M.: An application of error bounds for convex programming in a linear space. SIAM J. Control 13, 271–273 (1975)

  32. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

  33. Shapiro, A., Dentcheva, D., Ruszczynski, A.: Lectures on Stochastic Programming: Modeling and Theory. SIAM, Philadelphia (2009)

  34. Soyster, A.L.: Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 21, 1154–1157 (1973)

  35. Toh, K.C., Todd, M.J., Tütüncü, R.H.: SDPT3—a Matlab software package for semidefinite programming. Optim. Methods Softw. 11, 545–581 (1999)

  36. Toint, Ph.L.: Global convergence of a class of trust-region methods for nonconvex minimization in Hilbert space. IMA J. Numer. Anal. 8, 231–252 (1988)

  37. Tseng, P.: Approximation accuracy, gradient methods, and error bound for structured convex optimization. Math. Program. 125, 263–295 (2010)

  38. Tseng, P., Yun, S.: A coordinate gradient descent method for nonsmooth separable minimization. Math. Program. 117, 387–423 (2007)

  39. Turkay, S., Epperlein, E., Christofides, N.: Correlation stress testing for value-at-risk. J. Risk 5(4), 75–89 (2003)

  40. Tütüncü, R.H., Toh, K.C., Todd, M.J.: Solving semidefinite-quadratic-linear programs using SDPT3. Math. Program., Ser. B 95, 189–217 (2003)

  41. Yang, J., Zhang, Y.: Alternating direction algorithms for ℓ1-problems in compressive sensing. SIAM J. Sci. Comput. 33, 250–278 (2011)


Author information


Corresponding author

Correspondence to Ting Kei Pong.

Additional information

G. Li was partially supported by a research grant from the Australian Research Council.

T.K. Pong was supported by research grants from AFOSR and NSERC.

Appendix: Proof of Lemma 2.1

Proof

By a translation, we may assume without loss of generality that $X_0 = 0$. For a closed convex set $\Omega$, let $\sigma_\Omega$ denote the usual support function of $\Omega$ (that is, $\sigma_\Omega(X) = \sup_{Y \in \Omega} \operatorname{tr}(YX)$ for all $X \in \mathcal{S}^n$) and let $\mathrm{epi}\,\sigma_\Omega$ be the epigraph of the support function. Now, notice that if $(Y_1, \alpha_1) \in \mathrm{epi}\,\sigma_{\Omega_1}$ and $(Y_2, \alpha_2) \in \mathrm{epi}\,\sigma_{\Omega_2}$ are such that $\alpha_1 + \alpha_2 = \sigma_{\Omega_1 \cap \Omega_2}(Y_1 + Y_2)$ and $\|Y_1 + Y_2\|_F \le 1$, then we have $0 \le \sigma_{\Omega_1}(Y_1) \le \alpha_1$ and $\alpha_2 \le \sigma_{\Omega_1 \cap \Omega_2}(Y_1 + Y_2) \le R$. From these we obtain further that $\delta \|Y_2\|_F \le \sigma_{\Omega_2}(Y_2) \le \alpha_2 \le R$ and consequently

$$\|Y_1\|_F \le \|Y_1 + Y_2\|_F + \|Y_2\|_F \le 1 + \frac{R}{\delta}.$$

The proof of this lemma now follows similarly as in [23, Lemma 4.10], making use of the bounds derived above. □


Cite this article

Li, G., Ma, A.K.C. & Pong, T.K. Robust least square semidefinite programming with applications. Comput Optim Appl 58, 347–379 (2014). https://doi.org/10.1007/s10589-013-9634-8

