Abstract
In this paper we consider a mathematical program with semidefinite cone complementarity constraints (SDCMPCC). Such a problem is a matrix analogue of the mathematical program with (vector) complementarity constraints (MPCC) and includes MPCC as a special case. We first derive explicit formulas for the proximal and limiting normal cones of the graph of the normal cone to the positive semidefinite cone. Using these formulas and classical nonsmooth first order necessary optimality conditions, we derive explicit expressions for the strong (S-), Mordukhovich (M-) and Clarke (C-) stationarity conditions. We then give constraint qualifications under which a local solution of SDCMPCC is an S-, M- or C-stationary point, and show that applying these results to MPCC produces new and weaker necessary optimality conditions.
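For a closed convex cone such as \(\mathcal{S}^{n}_{+}\), membership in the graph of the normal cone reduces to a pointwise complementarity test: \((X,Y)\in \mathrm{gph}\,N_{\mathcal{S}^{n}_{+}}\) if and only if \(X\succeq 0\), \(Y\preceq 0\) and \(\langle X,Y\rangle =0\). A minimal numerical sketch of this test (the helper name and the sample matrices are ours, not from the paper):

```python
import numpy as np

def in_graph_of_normal_cone(X, Y, tol=1e-10):
    """Check (X, Y) in gph N_{S^n_+}: X PSD, Y NSD, <X, Y> = 0."""
    return (np.linalg.eigvalsh(X).min() >= -tol     # X positive semidefinite
            and np.linalg.eigvalsh(Y).max() <= tol  # Y negative semidefinite
            and abs(np.tensordot(X, Y)) <= tol)     # Frobenius inner product is 0

# Complementary pair: X and Y act on orthogonal eigenspaces.
X = np.diag([2.0, 0.0])
Y = np.diag([0.0, -3.0])
print(in_graph_of_normal_cone(X, Y))    # True
print(in_graph_of_normal_cone(X, -Y))   # False: -Y is not negative semidefinite
```

The S-, M- and C-stationarity conditions studied in the paper describe how multipliers may sit relative to this graph at a local solution.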
References
Aubin, J.-P.: Lipschitz behavior of solutions to convex minimization problems. Math. Oper. Res. 9, 87–111 (1984)
Ben-Tal, A., Nemirovski, A.: Robust convex optimization. Math. Oper. Res. 23, 769–805 (1998)
Ben-Tal, A., Nemirovski, A.: Robust convex optimization-methodology and applications. Math. Program. 92, 453–480 (2002)
Bhatia, R.: Matrix Analysis. Springer, New York (1997)
Bi, S., Han, L., Pan, S.: Approximation of rank function and its application to the nearest low-rank correlation matrix. J. Glob. Optim. 57, 1113–1137 (2013)
Bonnans, J.F., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer, New York (2000)
Brigo, D., Mercurio, F.: Calibrating LIBOR. Risk Mag. 15, 117–122 (2002)
Burge, J.P., Luenberger, D.G., Wenger, D.L.: Estimation of structured covariance matrices. Proc. IEEE 70, 963–974 (1982)
Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley-Interscience, New York (1983)
Clarke, F.H., Ledyaev, Yu.S., Stern, R.J., Wolenski, P.R.: Nonsmooth Analysis and Control Theory. Springer, New York (1998)
de Gaston, R.R.E., Safonov, M.G.: Exact calculation of the multiloop stability margin. IEEE Trans. Autom. Control 33, 156–171 (1988)
Dempe, S.: Foundations of Bilevel Programming. Kluwer, Berlin (2002)
Eaves, B.C.: On the basic theorem for complementarity. Math. Program. 1, 68–75 (1971)
Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York (2003)
Fazel, M.: Matrix Rank Minimization with Applications. PhD thesis, Stanford University (2002)
Fletcher, R.: Semi-definite matrix constraints in optimization. SIAM J. Control Optim. 23, 493–513 (1985)
Flegel, M.L., Kanzow, C.: On the Guignard constraint qualification for mathematical programs with equilibrium constraints. Optimization 54, 517–534 (2005)
Goh, K.C., Ly, J.C., Safonov, M.G., Papavassilopoulos, G., Turan, L.: Biaffine matrix inequality properties and computational methods. In: Proceedings of the American Control Conference, Baltimore, Maryland, pp. 850–855 (1994)
Henrion, R., Outrata, J.: On the calmness of a class of multifunctions. SIAM J. Optim. 13, 603–618 (2002)
Henrion, R., Outrata, J.: Calmness of constraint systems with applications. Math. Program. Ser. B 104, 437–464 (2005)
Hobbs, B.F., Metzler, C.B., Pang, J.S.: Strategic gaming analysis for electric power systems: an MPEC approach. IEEE Trans. Power Syst. 15, 638–645 (2000)
Lewis, A.S.: Nonsmooth analysis of eigenvalues. Math. Program. 84, 1–24 (1999)
Li, Q.N., Qi, H.D.: A sequential semismooth Newton method for the nearest low-rank correlation matrix problem. SIAM J. Optim. 21, 1641–1666 (2011)
Lillo, F., Mantegna, R.N.: Spectral density of the correlation matrix of factor models: a random matrix theory approach. Phys. Rev. E 72, 016219-1–016219-10 (2005)
Löwner, K.: Über monotone matrixfunktionen. Mathematische Zeitschrift 38, 177–216 (1934)
Luo, Z.Q., Pang, J.S., Ralph, D.: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, Cambridge (1996)
Hiriart-Urruty, J.-B., Ye, D.: Sensitivity analysis of all eigenvalues of a symmetric matrix. Numerische Mathematik 70, 45–72 (1995)
Hoge, W.: A subspace identification extension to the phase correlation method. IEEE Trans. Med. Imaging 22, 277–280 (2003)
Meng, F., Sun, D.F., Zhao, G.Y.: Semismoothness of solutions to generalized equations and Moreau-Yosida regularization. Math. Program. 104, 561–581 (2005)
Mordukhovich, B.S.: Generalized differential calculus for nonsmooth and set-valued mappings. J. Math. Anal. Appl. 183, 250–288 (1994)
Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation, I: Basic Theory, Grundlehren Series (Fundamental Principles of Mathematical Sciences), vol. 330. Springer, Berlin (2006)
Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation, II: Applications, Grundlehren Series (Fundamental Principles of Mathematical Sciences), vol. 331. Springer, Berlin (2006)
Mordukhovich, B.S., Shao, Y.: Nonsmooth sequential analysis in Asplund spaces. Trans. Am. Math. Soc. 348, 215–220 (1996)
Outrata, J.V., Kočvara, M., Zowe, J.: Nonsmooth Approach to Optimization Problems with Equilibrium Constraints: Theory, Applications and Numerical Results. Kluwer, Dordrecht (1998)
Overton, M., Womersley, R.S.: On the sum of the largest eigenvalues of a symmetric matrix. SIAM J. Matrix Anal. Appl. 13, 41–45 (1992)
Overton, M., Womersley, R.S.: Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices. Math. Program. 62, 321–357 (1993)
Psarris, P., Floudas, C.A.: Robust stability analysis of linear and nonlinear systems with real parameter uncertainty. AIChE Annual Meeting, paper 127e, Miami Beach, Florida (1992)
Qi, H.D., Fusek, P.: Metric regularity and strong regularity in linear and nonlinear semidefinite programming. Technical Report, School of Mathematics, University of Southampton (2007)
Robinson, S.M.: Stability theory for systems of inequalities, part I: linear systems. SIAM J. Numer. Anal. 12, 754–769 (1975)
Robinson, S.M.: Stability theory for systems of inequalities, part II: nonlinear systems. SIAM J. Numer. Anal. 13, 473–513 (1976)
Robinson, S.M.: First order conditions for general nonlinear optimization. SIAM J. Appl. Math. 30, 597–607 (1976)
Robinson, S.M.: Some continuity properties of polyhedral multifunctions. Math. Program. Stud. 14, 206–214 (1981)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)
Ryoo, H.S., Sahinidis, N.V.: Global optimization of nonconvex NLPs and MINLPs with applications in process design. Comput. Chem. Eng. 19, 551–566 (1995)
Safonov, M.G., Goh, K.C., Ly, J.H.: Control system synthesis via bilinear matrix inequalities. In: Proceedings of the American Control Conference, pp. 45–49. Baltimore, Maryland (1994)
Scheel, H., Scholtes, S.: Mathematical programs with complementarity constraints: stationarity, optimality and sensitivity. Math. Oper. Res. 25, 1–22 (2000)
Simon, D.: Reduced order Kalman filtering without model reduction. Control Intell. Syst. 35, 169–174 (2007)
Sun, D.F.: The strong second-order sufficient condition and constraint nondegeneracy in nonlinear semidefinite programming and their implications. Math. Oper. Res. 31, 761–776 (2006)
Sun, D.F., Sun, J.: Semismooth matrix valued functions. Math. Oper. Res. 27, 150–169 (2002)
Sun, D.F., Sun, J.: Strong semismoothness of eigenvalues of symmetric matrices and its applications in inverse eigenvalue problems. SIAM J. Numer. Anal. 40, 2352–2367 (2003)
Tsing, N.K., Fan, M.K.H., Verriest, E.I.: On analyticity of functions involving eigenvalues. Linear Algebra Appl. 207, 159–180 (1994)
VanAntwerp, J.G., Braatz, R.D., Sahinidis, N.V.: Globally optimal robust control for systems with nonlinear time-varying perturbations. Comput. Chem. Eng. 21, S125–S130 (1997)
Visweswaran, V., Floudas, C.A.: A global optimization algorithm (GOP) for certain classes of nonconvex NLPs—I. Theory. Comput. Chem. Eng. 14, 1397–1417 (1990)
Visweswaran, V., Floudas, C.A.: A global optimization algorithm (GOP) for certain classes of nonconvex NLPs—II. Application of theory and test problems. Comput. Chem. Eng. 14, 1419–1434 (1990)
Wu, L.X.: Fast at-the-money calibration of the LIBOR market model using Lagrange multipliers. J Comput. Financ. 6, 39–77 (2003)
Wu, Z., Ye, J.J.: First and second order condition for error bounds. SIAM J. Optim. 14, 621–645 (2003)
Yan, T., Fukushima, M.: Smoothing method for mathematical programs with symmetric cone complementarity constraints. Optimization 60, 113–128 (2011)
Ye, J.J.: Optimality conditions for optimization problems with complementarity constraints. SIAM J. Optim. 9, 374–387 (1999)
Ye, J.J.: Constraint qualifications and necessary optimality conditions for optimization problems with variational inequality constraints. SIAM J. Optim. 10, 943–962 (2000)
Ye, J.J.: Necessary and sufficient optimality conditions for mathematical programs with equilibrium constraints. J. Math. Anal. Appl. 307, 305–369 (2005)
Ye, J.J., Ye, X.Y.: Necessary optimality conditions for optimization problems with variational inequality constraints. Math. Oper. Res. 22, 977–997 (1997)
Ye, J.J., Zhu, D.L., Zhu, Q.J.: Exact penalization and necessary optimality conditions for generalized bilevel programming problems. SIAM J. Optim. 7, 481–507 (1997)
Zhang, Z.Y., Wu, L.X.: Optimal low-rank approximation to a correlation matrix. Linear Algebra Appl. 364, 161–187 (2003)
Zhao, Y.B.: An approximation theory of matrix rank minimization and its application to quadratic equations. Linear Algebra Appl. 437, 77–93 (2012)
Acknowledgments
The authors are grateful to the anonymous referees for their constructive suggestions and comments which helped to improve the presentation of the materials in this paper.
Additional information
D. Sun’s research is supported in part by Academic Research Fund under grant R-146-000-149-112.
The research of J. J. Ye was partially supported by NSERC.
Part of this work was done while C. Ding was with Department of Mathematics, National University of Singapore. The research of this author is supported by the National Science Foundation for Distinguished Young Scholars of China (Grant No. 11301515).
Appendix
Proof of Proposition 2.6
First, we show that (16) holds in the case \(A=\Lambda (A)\). For any \(H\in \mathcal{S}^{n}\), denote \(Y:=A+H\), and let \(P\in \mathcal{O}^{n}\) (depending on \(H\)) be such that
Let \(\delta >0\) be any fixed number such that \(0<\delta <\frac{\lambda _{|\alpha |}}{2}\) if \(\alpha \ne \emptyset \) and be any fixed positive number otherwise. Then, define the following continuous scalar function
Therefore, we have
For the scalar function \(f\), let \(F:\mathcal{S}^{n}\rightarrow \mathcal{S}^{n}\) be the corresponding Löwner’s operator [25], i.e., for any \(Z\in \mathcal{S}^{n}\),
where \(U\in \mathcal{O}^{n}\) satisfies that \(Z=U\Lambda (Z)U^{T}\). Since \(f \) is real analytic on the open set \((-\infty , \frac{\delta }{2})\cup (\delta ,+\infty )\), we know from [52, Theorem 3.1] that \(F\) is analytic at \(A\). Therefore, since \(A=\Lambda (A)\), it is well-known (see e.g., [4, Theorem V.3.3]) that for \(H\) sufficiently close to zero,
and
where \(\Sigma \in \mathcal{S}^{n}\) is given by (15). Let \(R(\cdot ):=\Pi _{\mathcal{S}_{+}^{n}}(\cdot )-F(\cdot )\). By the definition of \(f \), we know that \(F(A) =A_{+}:=\Pi _{\mathcal{S}_{+}^{n}}(A)\), which implies that \(R(A)=0\). Meanwhile, it is clear that the matrix-valued function \(R\) is directionally differentiable at \(A\), and from (14), the directional derivative of \(R\) at \(A\) along any given direction \(H\in \mathcal{S}^{n}\) is given by
By the Lipschitz continuity of \(\lambda (\cdot )\), we know that for \(H\) sufficiently close to zero,
and
Therefore, by the definition of \(F \), we know that for \(H\) sufficiently close to zero,
Since \(P\) satisfies (57), we know that for any \(\mathcal{S}^{n}\ni H\rightarrow 0\), there exists an orthogonal matrix \(Q\in \mathcal{O}^{|\beta |}\) such that
which was stated in [51] and was essentially proved in the derivation of Lemma 4.12 in [50]. Therefore, by noting that \((\Lambda (Y)_{\beta \beta })_{+}=O(\Vert H\Vert )\), we obtain from (60), (61) and (62) that
By (57) and (62), we know that
Since \(Q\in \mathcal{O}^{|\beta |}\), we have
By noting that \(\Pi _{\mathcal{S}_{+}^{|\beta |}}(\cdot )\) is globally Lipschitz continuous and \(\Pi _{\mathcal{S}_{+}^{|\beta |}}(Q \Lambda (Y)_{\beta \beta }Q^{T})=Q( \Lambda (Y)_{\beta \beta })_{+}Q^{T}\), we obtain that
Therefore,
By combining (59) and (63), we know that for any \(\mathcal{S}^{n}\ni H\rightarrow 0\),
Next, consider the case that \(A=\overline{P}\Lambda (A)\overline{P}^{T}\). Rewrite (57) as
Let \(\widetilde{H}:=\overline{P}^{T}H\overline{P}\). Then, we have
Therefore, since \(\overline{P}\in \mathcal{O}^{n}\), we know from (64) and (14) that for any \(\mathcal{S}^{n}\ni H\rightarrow 0\), (16) holds. \(\square \)
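The projection \(\Pi _{\mathcal{S}_{+}^{n}}\) manipulated throughout the proof is itself a Löwner operator, generated by the scalar function \(t\mapsto \max (t,0)\): one clips the negative eigenvalues. The sketch below (function and variable names are ours) checks this spectral formula together with the Moreau decomposition \(A=\Pi _{\mathcal{S}_{+}^{n}}(A)+\Pi _{\mathcal{S}_{-}^{n}}(A)\) with \(\langle \Pi _{\mathcal{S}_{+}^{n}}(A),\Pi _{\mathcal{S}_{-}^{n}}(A)\rangle =0\):

```python
import numpy as np

def proj_psd(A):
    """Loewner operator for f(t) = max(t, 0): the metric projection onto S^n_+."""
    lam, P = np.linalg.eigh(A)                 # A = P diag(lam) P^T, lam ascending
    return (P * np.maximum(lam, 0.0)) @ P.T    # scale column j of P by max(lam_j, 0)

A = np.array([[1.0, 2.0],
              [2.0, -2.0]])                    # eigenvalues 2 and -3
A_plus = proj_psd(A)
A_minus = A - A_plus                           # equals the projection onto S^n_-
print(np.allclose(A, A_plus + A_minus))              # True (Moreau decomposition)
print(abs(np.tensordot(A_plus, A_minus)) < 1e-10)    # True (orthogonal parts)
```

Since \(A_{+}\) and \(A_{-}\) share the eigenvectors of \(A\), their Frobenius inner product is \(\sum _{i}\max (\lambda _{i},0)\min (\lambda _{i},0)=0\).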
Proof of Proposition 3.3
Denote the set on the right-hand side of (27) by \(\mathcal N\). We first show that \(N_{\mathrm{gph}\,N_{\mathcal{S}^{n}_+}} (0,0)\subseteq \mathcal{N}\). By the definition of the limiting normal cone in (8), we know that \((U^*,V^*)\in N_{\mathrm{gph}\,N_{\mathcal{S}^{n}_+}} (0,0)\) if and only if there exist two sequences \(\{ ({U^k}^*,{V^k}^*)\}\) converging to \((U^*,V^*)\) and \(\{(U^k,V^k)\}\) converging to \((0,0)\) with \(({U^k}^*,{V^k}^*)\in N_{\mathrm{gph}\,N_{\mathcal{S}^{n}_+}}^\pi (U^k,V^k)\) and \((U^k,V^k)\in \mathrm{gph}\,N_{\mathcal{S}^{n}_+}\) for each \(k\).
For each \(k\), denote \(A^{k}\!:=\!U^{k}\!+\!V^{k}\in \mathcal{S}^{n}\) and let \(A^{k}\!=\!P^{k}\Lambda (A^{k})(P^{k})^{T}\) with \(P^{k}\!\in \!\mathcal{O}^{n}\) be the eigenvalue decomposition of \(A^{k}\). Then for any \(i\!\in \!\{1,\ldots ,n\}\), we have
Since \(\{P^k\}_{k=1}^{\infty }\) is uniformly bounded, by taking a subsequence if necessary, we may assume that \(\{P^k\}_{k=1}^{\infty }\) converges to an orthogonal matrix \(Q := \displaystyle {\lim \nolimits _{k\rightarrow \infty }} P^k\in \mathcal{O}^{n}\). For each \(k\), we know that the vector \(\lambda (A^{k})\) is an element of \({\mathfrak {R}}^{n}_{\gtrsim }\). By taking a subsequence if necessary, we may assume that for each \(k\), \(\Lambda (A^{k})\) has the same form, i.e.,
where \(\beta _{+}\), \(\beta _{0}\) and \(\beta _{-}\) are the three index sets defined by
Since \(({U^k}^*,{V^k}^*)\in N_{\mathrm{gph}\,N_{\mathcal{S}^{n}_+}}^\pi (U^k,V^k) \), we know from Proposition 3.2 that for each \(k\), there exist
and
such that
where \(\widetilde{{U^{k}}}^*=(P^{k})^{T}{U^{k}}^*P^{k}\), \(\widetilde{{V^{k}}}^*=(P^{k})^{T}{V^{k}}^*P^{k}\) and
Since for each \(k\), each element of \({\Sigma }^{k}_{\beta _{+}\beta _{-}}\) belongs to the interval \([0,1]\), by further taking a subsequence if necessary, we may assume that the limit of \(\{ {\Sigma }^{k}_{\beta _{+}\beta _{-}} \}_{k=1}^{\infty }\) exists. Therefore, by the definition of \(\mathcal{U}_{n}\) in (24), we know that
where \(\Xi _{1}\) and \(\Xi _{2}\) are given by (26). Therefore, we obtain from (65) that \((U^{*},V^{*})\in \mathcal N.\)
The other direction, i.e., \(N_{\mathrm{gph}\,N_{\mathcal{S}^{n}_+}} (0,0)\supseteq \mathcal{N}\), can be proved in a way similar to, but simpler than, that of the second part of the proof of Theorem 3.1. We omit the details here. \(\square \)
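The first part of the proof works with sequences \(\{(U^k,V^k)\}\subset \mathrm{gph}\,N_{\mathcal{S}^{n}_{+}}\) converging to \((0,0)\). A small sketch with hypothetical data, using the equivalent membership test \(U\succeq 0\), \(V\preceq 0\), \(\langle U,V\rangle =0\), illustrates one such sequence:

```python
import numpy as np

def in_graph(U, V, tol=1e-10):
    """(U, V) in gph N_{S^n_+} iff U is PSD, V is NSD and <U, V> = 0."""
    return (np.linalg.eigvalsh(U).min() >= -tol
            and np.linalg.eigvalsh(V).max() <= tol
            and abs(np.tensordot(U, V)) <= tol)

# Rank-one blocks on orthogonal eigenspaces, scaled to approach (0, 0).
E_plus = np.diag([1.0, 0.0, 0.0])
E_minus = np.diag([0.0, 0.0, 1.0])
for k in (1, 10, 100):
    U_k, V_k = E_plus / k, -E_minus / k
    assert in_graph(U_k, V_k)          # each pair lies in the graph
print("all pairs lie in the graph and converge to (0, 0)")
```

The limiting normal cone at \((0,0)\) collects the limits of proximal normals taken along exactly such sequences.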
Proof of Theorem 3.1
“\(\Longrightarrow \)” Suppose that \((X^*,Y^*)\in N_{\mathrm{gph}\,N_{\mathcal{S}^n_+}} (X,Y)\). By the definition of the limiting normal cone in (8), we know that \((X^*,Y^*)=\displaystyle {\lim \nolimits _{k \rightarrow \infty }} ({X^k}^*,{Y^k}^*) \) with
where \((X^k,Y^k) \rightarrow (X,Y)\) and \((X^k,Y^k)\in \mathrm{gph}\,N_{\mathcal{S}^n_+}\). For each \(k\), denote \(A^{k}:=X^{k}+Y^{k}\) and let \(A^{k}=P^{k}\Lambda (A^{k})(P^{k})^{T}\) be the eigenvalue decomposition of \(A^{k}\). Since \( \Lambda (A) = \displaystyle {\lim \nolimits _{k\rightarrow \infty }} \Lambda (A^{k}) \), we know that \(\Lambda (A^{k})_{\alpha \alpha }\succ 0\) and \(\Lambda (A^{k})_{\gamma \gamma }\prec 0\) for \(k\) sufficiently large, and \(\displaystyle {\lim \nolimits _{k \rightarrow \infty }} \Lambda (A^{k})_{\beta \beta } = 0\).
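The convergence \(\Lambda (A^{k})\rightarrow \Lambda (A)\) used here rests on the Lipschitz continuity of the ordered eigenvalue map: by Weyl's inequality, \(|\lambda _{i}(A+H)-\lambda _{i}(A)|\le \Vert H\Vert _{2}\) for every \(i\). A quick numerical check (the matrices are our own illustration):

```python
import numpy as np

A = np.diag([3.0, 0.0, -2.0])
H = np.array([[0.0, 0.1, 0.0],
              [0.1, 0.0, 0.1],
              [0.0, 0.1, 0.0]])        # small symmetric perturbation
lam_A = np.linalg.eigvalsh(A)[::-1]    # eigenvalues in descending order
lam_AH = np.linalg.eigvalsh(A + H)[::-1]
spec_norm_H = np.linalg.norm(H, 2)     # spectral norm ||H||_2 = 0.1 * sqrt(2)
# Weyl: every ordered eigenvalue moves by at most ||H||_2.
print(np.all(np.abs(lam_AH - lam_A) <= spec_norm_H + 1e-12))   # True
```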
Since \(\{P^k\}_{k=1}^{\infty }\) is uniformly bounded, by taking a subsequence if necessary, we may assume that \(\{P^k\}_{k=1}^{\infty }\) converges to an orthogonal matrix \(\widehat{P}\in \mathcal{O}^{n}(A)\). We can write \(\widehat{P}=\left[ \overline{P}_{\alpha } \ \ \overline{P}_{\beta }Q \ \ \overline{P}_{\gamma }\right] \), where \(Q\in \mathcal {O}^{|\beta |}\) is some \(|\beta |\times |\beta |\) orthogonal matrix. By further taking a subsequence if necessary, we may also assume that there exists a partition \(\pi (\beta )=(\beta _{+}, \beta _{0}, \beta _{-})\) of \(\beta \) such that for each \(k\),
This implies that for each \(k\),
Then, for each \(k\), since \(({X^k}^*,{Y^k}^*)\in N_{\mathrm{gph}\,N_{\mathcal{S}^n_+}}^\pi (X^k,Y^k)\), we know from Proposition 3.2 that there exist
and
such that
where \(\widetilde{{X^{k}}}^*=(P^{k})^{T}{X^{k}}^*P^{k}, \widetilde{{Y^{k}}}^*=(P^{k})^{T}{Y^{k}}^*P^{k}\) and
By taking limits as \(k\rightarrow \infty \), we obtain that
and
By simple calculations, we obtain from (68) that
This, together with the definition of \(\mathcal{U}_{|\beta |}\), shows that there exist \(\Xi _{1}\in \mathcal{U}_{|\beta |}\) and the corresponding \(\Xi _{2}\) such that
and
where \(\Theta _{1}\) and \(\Theta _{2}\) are given by (22). Meanwhile, since \(Q\in \mathcal{O}^{|\beta |}\), by taking limits in (67) as \(k\rightarrow \infty \), we obtain that
and
Hence, by Proposition 3.3, we conclude that \((\widetilde{X}_{\beta \beta }^{*}, \widetilde{Y}_{\beta \beta }^{*})\in N_{\mathrm{gph}\,N_{\mathcal{S}^{|\beta |}_+}} (0,0)\). From (69), it is easy to check that \((X^{*},Y^{*})\) satisfies the conditions (28) and (29).
“\(\Longleftarrow \)” Let \((X^{*},Y^{*})\) satisfy (28) and (29). We shall show that there exist two sequences \(\{(X^k,Y^k)\}\) converging to \((X,Y)\) and \(\{({X^k}^*,{Y^k}^*)\}\) converging to \((X^*,Y^*)\) with \( (X^k,Y^k)\in \mathrm{gph}\,N_{\mathcal{S}^n_+}\) and \(({X^k}^*,{Y^k}^*)\in N_{\mathrm{gph}\,N_{\mathcal{S}^n_+}}^\pi (X^k,Y^k)\) for each \(k\).
Since \((\widetilde{X}_{\beta \beta }^{*}, \widetilde{Y}_{\beta \beta }^{*})\in N_{\mathrm{gph}\,N_{\mathcal{S}^{|\beta |}_+}} (0,0)\), by Proposition 3.3, we know that there exist an orthogonal matrix \(Q\in \mathcal{O}^{|\beta |}\) and \(\Xi _1\in \mathcal{U}_{|\beta |}\) such that
Since \(\Xi _1\in \mathcal{U}_{|\beta |}\), we know that there exists a sequence \(\{z^{k}\}\in {\mathfrak {R}}^{|\beta |}_{\gtrsim }\) converging to \(0\) such that \(\Xi _1=\displaystyle {\lim \nolimits _{k\rightarrow \infty }}D(z^k)\). Without loss of generality, we can assume that there exists a partition \(\pi (\beta )=(\beta _{+},\beta _{0},\beta _{-})\in \fancyscript{P}(\beta )\) such that for all \(k\),
For each \(k\), let
where \(\widehat{P}=\left[ \overline{P}_{\alpha } \, \, \overline{P}_{\beta }Q \, \, \overline{P}_{\gamma }\right] \in \mathcal{O}^{n}(A)\). Then, it is clear that \((X^k,Y^k)\in \mathrm{gph}\,N_{\mathcal{S}^n_+}\) for each \(k\) and that \(\{(X^k,Y^k)\}\) converges to \((X,Y)\). For each \(k\), denote
and
where
Next, for each \(k\), we define two matrices \(\widehat{{X^{k}}}^*, \widehat{{Y^{k}}}^*\in \mathcal{S}^{n}\). Let \(i,j\in \{1,\ldots ,n\}\). If \((i,j)\) and \((j,i)\notin (\alpha \times \beta _{-}) \cup (\beta _{+}\times \gamma )\cup (\beta \times \beta )\), we define
Otherwise, denote \(c^{k}:=(\Sigma ^{k})_{i,j}\), \(k=1,2,\ldots \), and let \(c:=\displaystyle {\lim \nolimits _{k\rightarrow \infty }}c^{k}\), which exists by the construction of \(\Sigma ^{k}\). We consider the following four cases.
Case 1 \((i,j)\) or \((j,i)\in \alpha \times \beta _{-}\). In this case, we know from (28) that \({\widetilde{X}^*}_{i,j}=0\). Since \(c^{k}\ne 0\) for all \(k\) and \(c^{k}\rightarrow 1\) as \(k\rightarrow \infty \), we define
Then, we have
Case 2 \((i,j)\) or \((j,i)\in \beta _{+}\times \gamma \). In this case, we know from (28) that \({\widetilde{Y}^*}_{i,j}=0\). Since \(c^{k}\ne 1\) for all \(k\) and \(c^{k}\rightarrow 0\) as \(k\rightarrow \infty \), we define
Then, we know that
Case 3 \((i,j)\) or \((j,i)\in (\beta \times \beta ){\setminus } (\beta _{+}\times \beta _{-}) \). In this case, we define
Case 4 \((i,j)\) or \((j,i)\in \beta _{+}\times \beta _{-}\). Since \(c\in [0,1]\), we consider the following two sub-cases:
Case 4.1 \(c\ne 1\). Since \(c^{k}\ne 1\) for all \(k\) large enough, we define
Then, from (70), we know that
Case 4.2 \(c=1\). Since \(c^{k}\ne 0\) for all \(k\) large enough, we define
Then, again from (70), we know that
For each \(k\), define \({X^k}^*=\widehat{P} \widehat{{X^{k}}}^*\widehat{P}^{T}\) and \({Y^k}^*=\widehat{P} \widehat{{Y^{k}}}^*\widehat{P}^{T}\). Then, from (71)–(76) we obtain that
and
Moreover, from (74) and (70), we have
From Proposition 3.2 and (77), we know that
Hence, the assertion of the theorem follows. \(\square \)
Cite this article
Ding, C., Sun, D. & Ye, J.J. First order optimality conditions for mathematical programs with semidefinite cone complementarity constraints. Math. Program. 147, 539–579 (2014). https://doi.org/10.1007/s10107-013-0735-z
Keywords
- Mathematical program with semidefinite cone complementarity constraints
- Necessary optimality conditions
- Constraint qualifications
- S-stationary conditions
- M-stationary conditions
- C-stationary conditions