
Computing the distance between the linear matrix pencil and the completely positive cone

Computational Optimization and Applications

Abstract

In this paper, we consider the problem of computing the distance between a linear matrix pencil and the completely positive cone. We formulate it as a linear optimization problem over the cone of moments and the second-order cone. A semidefinite relaxation algorithm is presented, and its convergence is studied. We also propose a new model for checking membership in the completely positive cone.
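To indicate how the second-order cone enters the formulation (an illustrative sketch using the notation of the appendix; the precise definitions of \(c\), \(\mathrm {vech}\), and the cones involved are given in the body of the paper), a distance-type objective \(\min _u \Vert v(u)\Vert _2\) can be rewritten in epigraph form as

$$\begin{aligned} \min _{u,\gamma } \ \gamma \quad \text {subject to} \quad (v(u),\gamma ) \in {\mathcal {L}}_{{\bar{n}}+1}, \end{aligned}$$

writing \({\mathcal {L}}_{{\bar{n}}+1}\) for the second-order cone \(\{(w,\gamma ): \Vert w\Vert _2 \le \gamma \}\), as in the appendix. There, the pair \((w^{*,k},\gamma ^{*,k})\in {\mathcal {L}}_{{\bar{n}}+1}\) plays this role, while the completely positive side is represented through a truncated moment vector lying in the cone \({\mathcal {R}}\).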

References

  1. Ben-Tal, A., Nemirovski, A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS-SIAM Series on Optimization. SIAM, Philadelphia (2001)

  2. Berman, A., Shaked-Monderer, N.: Completely Positive Matrices. World Scientific, Singapore (2003)

  3. Bomze, I.M., de Klerk, E.: Solving standard quadratic optimization problems via linear, semidefinite and copositive programming. J. Global Optim. 24, 163–185 (2002)

  4. Bomze, I.M.: Copositive optimization – recent developments and applications. Eur. J. Oper. Res. 216, 509–520 (2012)

  5. Burer, S.: On the copositive representation of binary and continuous nonconvex quadratic programs. Math. Program. Ser. A 120, 479–495 (2009)

  6. Curto, R., Fialkow, L.: Truncated K-moment problems in several variables. J. Oper. Theory 54, 189–226 (2005)

  7. de Klerk, E., Pasechnik, D.V.: Approximation of the stability number of a graph via copositive programming. SIAM J. Optim. 12, 875–892 (2002)

  8. Dickinson, P.J.: The copositive cone, the completely positive cone and their generalisations. PhD thesis, University of Groningen, Groningen, The Netherlands (2013)

  9. Dickinson, P.J., Gijben, L.: On the computational complexity of membership problems for the completely positive cone and its dual. Comput. Optim. Appl. 57, 403–415 (2014)

  10. Dür, M.: Copositive programming – a survey. In: Diehl, M., Glineur, F., Jarlebring, E., Michiels, W. (eds.) Recent Advances in Optimization and Its Applications in Engineering, pp. 3–20. Springer, Berlin (2010)

  11. Gvozdenović, N., Laurent, M.: Semidefinite bounds for the stability number of a graph via sums of squares of polynomials. Math. Program. Ser. B 110, 145–173 (2007)

  12. Fialkow, L., Nie, J.: The truncated moment problem via homogenization and flat extensions. J. Funct. Anal. 263, 1682–1700 (2012)

  13. Helton, J.W., Nie, J.: A semidefinite approach for truncated K-moment problems. Found. Comput. Math. 12, 851–881 (2012)

  14. Henrion, D., Lasserre, J.: Detecting global optimality and extracting solutions in GloptiPoly. In: Positive Polynomials in Control. Lecture Notes in Control and Information Science, pp. 293–310. Springer, Berlin (2005)

  15. Henrion, D., Lasserre, J., Loefberg, J.: GloptiPoly 3: moments, optimization and semidefinite programming. Optim. Methods Softw. 24, 761–779 (2009)

  16. Lasserre, J.B.: Moments, Positive Polynomials and Their Applications. Imperial College Press, London (2009)

  17. Lasserre, J.B.: New approximations for the cone of copositive matrices and its dual. Math. Program. Ser. A 144, 265–276 (2014)

  18. Laurent, M.: Sums of squares, moment matrices and optimization over polynomials. In: Emerging Applications of Algebraic Geometry. IMA Volumes in Mathematics and Its Applications, vol. 149, pp. 157–270. Springer, New York (2009)

  19. Murty, K.G., Kabadi, S.N.: Some NP-complete problems in quadratic and nonlinear programming. Math. Program. 39, 117–129 (1987)

  20. Nie, J., Schweighofer, M.: On the complexity of Putinar’s Positivstellensatz. J. Complex. 23, 135–150 (2007)

  21. Nie, J.: The \(A\)-truncated K-moment problem. Found. Comput. Math. 14, 1243–1276 (2014)

  22. Nie, J.: Linear optimization with cones of moments and nonnegative polynomials. Math. Program. Ser. B 153, 247–274 (2015)

  23. Nie, J.: Optimality conditions and finite convergence of Lasserre’s hierarchy. Math. Program. Ser. A 146, 97–121 (2014)

  24. Nie, J., Ranestad, K.: Algebraic degree of polynomial optimization. SIAM J. Optim. 20, 485–502 (2009)

  25. Papachristodoulou, A., Anderson, J., Valmorbida, G., Prajna, S., Seiler, P., Parrilo, P.A.: SOSTOOLS: sum of squares optimization toolbox for MATLAB (2013). Available from http://www.eng.ox.ac.uk/control/sostools

  26. Parrilo, P.A.: Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization. PhD thesis, California Institute of Technology (2000)

  27. Peña, J., Vera, J., Zuluaga, L.: Computing the stability number of a graph via linear and semidefinite programming. SIAM J. Optim. 18, 87–105 (2007)

  28. Putinar, M.: Positive polynomials on compact semi-algebraic sets. Indiana Univ. Math. J. 42, 969–984 (1993)

  29. Putinar, M., Vasilescu, F.-H.: Positive polynomials on semialgebraic sets. C. R. Acad. Sci. Ser. I 328, 585–589 (1999)

  30. Shapiro, A., Scheinberg, K.: Duality and optimality conditions. In: Wolkowicz, H., Saigal, R., Vandenberghe, L. (eds.) Handbook of Semidefinite Programming, vol. 27, pp. 67–110. Springer, New York (2000)

  31. Sturm, J.F.: SeDuMi 1.02: a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 11–12, 625–653 (1999)

  32. Zhou, A., Fan, J.: Interiors of completely positive cones. J. Global Optim. 63, 653–675 (2015)

Acknowledgments

The first author is partially supported by NSFC 11171217 and 11571234.

Author information

Corresponding author

Correspondence to Jinyan Fan.

Appendix: Proof of Theorem 3.2

Proof

(i) Let \((p^0,z^0)\) be a relative interior point of (D). Then

$$\begin{aligned} c \circ z^0 +p^0=0, \quad \mathrm {vech}(A_i)\bullet (c \circ z^0) = 0, \ i=1,\ldots ,m, \end{aligned}$$
(6.1)

\(\Vert z^0\Vert <1\), and \(p^0(x) > 0\) on \(\Delta \) due to [22, Lemma 3.1]. Since \(\Delta \) is compact, there exist \(\epsilon _0 > 0\) and \(\delta > 0\) such that

$$\begin{aligned} p(x)-\epsilon _0 > \epsilon _0, \; \forall x\in \Delta , \; \forall p\in B(p^0,\delta ). \end{aligned}$$

By Nie and Schweighofer [20, Theorem 6], there exists \(k_0 > 0\) such that

$$\begin{aligned} p(x)-\epsilon _0 \in I_{2k_0}(h)+Q_{k_0} (g),\; \forall p\in B(p^0,\delta ). \end{aligned}$$

Thus, (\(D^k\)) has a relative interior point for all \(k \ge k_0\), so strong duality holds between (\(P^k\)) and (\(D^k\)). Since (P) is feasible, the relaxation (\(P^k\)) is also feasible, and it has a minimizer \(({\mathbf {x}}^{*,k}, w^{*,k}, y^{*,k}, \gamma ^{*,k},{\tilde{\mathbf {x}}}^{*,k})\) (cf. [1, Theorem 2.4.1]).
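For the reader's convenience, the duality fact invoked here can be paraphrased as follows (a standard conic duality statement, recalled loosely; see [1] and [30] for precise versions): for a primal–dual pair of conic problems

$$\begin{aligned} \min \ \{\langle q, u\rangle : \ {\mathcal {A}}u = b, \ u \in K\} \qquad \text {and} \qquad \max \ \{\langle b, \lambda \rangle : \ q - {\mathcal {A}}^*\lambda \in K^*\}, \end{aligned}$$

if one of the two problems has a relative interior feasible point and the other problem is feasible, then there is no duality gap and the optimal value of the other problem is attained. Applied with (\(D^k\)) in the role of the strictly feasible problem, this yields a minimizer of (\(P^k\)).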

(ii) Firstly, we prove that the sequence \(\{({\mathbf {x}}^{*,k}, w^{*,k}, y^{*,k}, \gamma ^{*,k})\}\) is bounded. Since (\(P^k\)) is a relaxation of (P) and \((w^{*,k}, \gamma ^{*,k})\in {\mathcal {L}}_{{\bar{n}}+1}\), we have \(0 \le \Vert w^{*,k}\Vert _2 \le \gamma ^{*,k} \le \vartheta _P\), so \(\{(w^{*,k}, \gamma ^{*,k})\}\) is bounded. Since \(A_i\) \((i=1, \ldots , m)\) are linearly independent and \(({\mathbf {x}}^{*,k}, w^{*,k}, y^{*,k})\) satisfies

$$\begin{aligned} c \circ \left( {\mathbf {x}}^{*,k} +\sum _{i=1}^m y^{*,k}_i \mathrm {vech}(A_i) \right) =w^{*,k} + c \circ \mathrm {vech}(A_0), \end{aligned}$$
(6.2)

we know that \(\{y^{*,k}\}\) is bounded if \(\{{\mathbf {x}}^{*,k}\}\) is bounded. Thus, it suffices to prove that \(\{{\mathbf {x}}^{*,k}\}\) is bounded.
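To spell out this transfer of boundedness (an editorial remark; the linear map \(T\) is introduced only for this purpose), rewrite (6.2) as

$$\begin{aligned} T(y^{*,k}) := c \circ \sum _{i=1}^m y^{*,k}_i \mathrm {vech}(A_i) = w^{*,k} + c \circ \mathrm {vech}(A_0) - c \circ {\mathbf {x}}^{*,k}. \end{aligned}$$

Since \(A_1,\ldots ,A_m\) are linearly independent, and assuming (as for the usual Frobenius weighting) that every entry of \(c\) is nonzero, the map \(T\) is injective, so \(\Vert y^{*,k}\Vert _2 \le C\, \Vert T(y^{*,k})\Vert _2\) for a constant \(C\) depending only on \(A_1,\ldots ,A_m\) and \(c\); the right-hand side of the display is bounded whenever \(\{w^{*,k}\}\) and \(\{{\mathbf {x}}^{*,k}\}\) are.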

Let \((p^0,z^0)\) and \(\epsilon _0\) be as in the proof of (i). Because \(I_{2k_0} (h) + Q_{k_0} (g)\) is dual to \(\Gamma _{k_0}\), for all \(k \ge k_0\), we have

$$\begin{aligned} 0 \le \langle p^0-\epsilon _0, {\tilde{\mathbf {x}}}^{*,k}\rangle = \langle p^0, {\tilde{\mathbf {x}}}^{*,k}\rangle - \epsilon _0 \langle 1, {\tilde{\mathbf {x}}}^{*,k}\rangle . \end{aligned}$$
(6.3)

It follows from \(\langle z^0, w^{*,k}\rangle +\langle 1, \gamma ^{*,k}\rangle \ge 0\), \(\gamma ^{*,k}\le \vartheta _P\), (6.1) and (6.2) that

$$\begin{aligned} \langle p^0, {\tilde{\mathbf {x}}}^{*,k}\rangle= & {} \langle -c \circ z^0, {\tilde{\mathbf {x}}}^{*,k}\rangle = \langle -c \circ z^0, {\mathbf {x}}^{*,k}\rangle =\langle z^0, -c \circ {\mathbf {x}}^{*,k}\rangle \nonumber \\= & {} \left\langle z^0, c \circ \sum \limits _{i=1}^m y_i^{*,k} \mathrm {vech}(A_i)-w^{*,k} -c \circ \mathrm {vech}(A_0) \right\rangle \nonumber \\= & {} \sum \limits _{i=1}^m y_i^{*,k} \mathrm {vech}(A_i)\bullet (c \circ z^0)- \langle z^0, w^{*,k}\rangle - \mathrm {vech}(A_0)\bullet (c \circ z^0)\nonumber \\= & {} - \langle z^0, w^{*,k}\rangle - \mathrm {vech}(A_0)\bullet (c \circ z^0)\nonumber \\\le & {} \gamma ^{*,k}- \mathrm {vech}(A_0)\bullet (c \circ z^0)\nonumber \\\le & {} \vartheta _P-\mathrm {vech}(A_0)\bullet (c \circ z^0)\nonumber \\:= & {} K_0. \end{aligned}$$
(6.4)

Denote by \({\mathbf {0}}\) the zero vector in \({\mathbb {N}}^n\). By (6.3), we obtain

$$\begin{aligned} 0 \le \langle p^0-\epsilon _0, {\tilde{\mathbf {x}}}^{*,k}\rangle \le K_0- \epsilon _0 ({\tilde{\mathbf {x}}}^{*,k})_{{\mathbf {0}}}, \end{aligned}$$

i.e.,

$$\begin{aligned} ({\tilde{\mathbf {x}}}^{*,k})_{{\mathbf {0}}}\le K_1:=K_0/\epsilon _0. \end{aligned}$$
(6.5)

For \(\Delta \) given in (2.10), since \(I (h) + Q(g)\) is archimedean, there exist \(\varrho >0\) and \(k_1 \in {\mathbb {N}}\) such that

$$\begin{aligned} \varrho -\Vert x\Vert _2^2 \in I_{2k_1} (h) + Q_{k_1}(g). \end{aligned}$$

So, for all \(k \ge k_1\), we have

$$\begin{aligned} 0 \le \langle \varrho -\Vert x\Vert _2^2, {\tilde{\mathbf {x}}}^{*,k}\rangle = \varrho ({\tilde{\mathbf {x}}}^{*,k})_{{\mathbf {0}}}- \sum _{|\alpha |=1} ({\tilde{\mathbf {x}}}^{*,k})_{2\alpha }, \end{aligned}$$

which, together with (6.5), gives

$$\begin{aligned} \sum _{|\alpha |=1} ({\tilde{\mathbf {x}}}^{*,k})_{2\alpha } \le \varrho K_1. \end{aligned}$$
(6.6)

Note that for each \(t = 1, \ldots , k -k_1\), we have

$$\begin{aligned} \Vert x\Vert _2^{2t-2}(\varrho -\Vert x\Vert _2^2)\in I_{2k} (h) + Q_{k}(g). \end{aligned}$$

The membership \({\tilde{\mathbf {x}}}^{*,k}\in \Gamma _{k}\) implies that

$$\begin{aligned} \varrho \langle \Vert x\Vert _2^{2t-2}, {\tilde{\mathbf {x}}}^{*,k}\rangle -\langle \Vert x\Vert _2^{2t}, {\tilde{\mathbf {x}}}^{*,k}\rangle \ge 0, \quad t = 1, \ldots , k -k_1. \end{aligned}$$
(6.7)

Combining (6.6) and (6.7), and using induction on \(t\), we obtain

$$\begin{aligned} \langle \Vert x\Vert _2^{2t}, {\tilde{\mathbf {x}}}^{*,k}\rangle \le \varrho ^t K_1, \quad t = 1, \ldots , k -k_1. \end{aligned}$$

Let \({\mathbf {x}}^k := {\tilde{\mathbf {x}}}^{*,k}|_{2k-2k_1}\). Then the moment matrix \(M_{k-k_1} ({\mathbf {x}}^k) \succeq 0\) and

$$\begin{aligned} \Vert {\mathbf {x}}^k\Vert _2 \le \Vert M_{k-k_1} ({\mathbf {x}}^k)\Vert _F \le \mathrm {trace} (M_{k-k_1} ({\mathbf {x}}^k)) =\sum _{i=0}^{k-k_1} \sum _{|\alpha |=i} ({\tilde{\mathbf {x}}}^{*,k})_{2\alpha }. \end{aligned}$$
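Both inequalities above are elementary: every entry of \({\mathbf {x}}^k\) occurs at least once among the entries of \(M_{k-k_1} ({\mathbf {x}}^k)\), which gives the first one, and for any positive semidefinite matrix \(M\) with eigenvalues \(\lambda _i \ge 0\),

$$\begin{aligned} \Vert M\Vert _F = \Big (\sum _i \lambda _i^2\Big )^{1/2} \le \sum _i \lambda _i = \mathrm {trace}\, (M), \end{aligned}$$

which gives the second.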

Since

$$\begin{aligned} \sum _{|\alpha |=i} ({\tilde{\mathbf {x}}}^{*,k})_{2\alpha }=\left\langle \sum _{|\alpha |=i} x^{2\alpha }, {\mathbf {x}}^k\right\rangle \le \langle \Vert x\Vert _2^{2i},{\mathbf {x}}^k\rangle \le \varrho ^i K_1, \end{aligned}$$

we have

$$\begin{aligned} \Vert {\mathbf {x}}^k\Vert _2\le (1+\varrho +\cdots +\varrho ^{k-k_1})K_1. \end{aligned}$$
(6.8)

Fix \(k_2 > k_1\) such that \({\mathbf {x}}^{*,k}\) is a subvector of \({\mathbf {x}}^k|_{k_2-k_1}\). It follows from \({\mathbf {x}}^{*,k} = {\mathbf {x}}^k|_{{\mathcal {E}}}\) that

$$\begin{aligned} \Vert {\mathbf {x}}^{*,k}\Vert _2\le \Vert {\mathbf {x}}^{k}|_{k_2-k_1}\Vert _2\le (1+\varrho +\cdots +\varrho ^{k_2-k_1})K_1. \end{aligned}$$

The above inequality shows that \(\{{\mathbf {x}}^{*,k}\}\) is bounded. Therefore, \(\{({\mathbf {x}}^{*,k}, w^{*,k}, y^{*,k}, \gamma ^{*,k})\}\) is bounded.

Secondly, we prove that every accumulation point of \(\{({\mathbf {x}}^{*,k}, w^{*,k}, y^{*,k}, \gamma ^{*,k})\}\) is a minimizer of (P). Without loss of generality, we assume \(({\mathbf {x}}^{*,k}, w^{*,k}, y^{*,k}, \gamma ^{*,k})\rightarrow ({\mathbf {x}}^{*}, w^{*}, y^{*}, \gamma ^{*})\) as \(k\rightarrow +\infty \). We first show that \({\mathbf {x}}^*\in {\mathcal {R}}\). Note that \(\Delta \) is compact. Up to a scaling, we can assume \(\Delta \subseteq B(0, \varrho )\) with \(\varrho < 1\). By (6.8), we have

$$\begin{aligned} \Vert {\mathbf {x}}^{k}\Vert _2\le K_1/(1-\varrho ), \end{aligned}$$

which implies that \(\{{\mathbf {x}}^{k}\}\) is bounded. By appending zero entries at the tail, each tms \({\mathbf {x}}^k\) can be extended to a vector in \({\mathbb {R}}^{{\mathbb {N}}^n_{\infty }}\), which is a Hilbert space equipped with the inner product

$$\begin{aligned} \langle u, v\rangle =\sum _{\alpha \in {\mathbb {N}}^{n}} u_{\alpha } v_{\alpha }, \; \forall u, v \in {\mathbb {R}}^{{\mathbb {N}}^n_{\infty }}. \end{aligned}$$

So, \(\{{\mathbf {x}}^{k}\}\) is also bounded in \({\mathbb {R}}^{{\mathbb {N}}^n_{\infty }}\). By Alaoglu’s Theorem (cf. [16, Theorem C.18]), there exists a subsequence \(\{{\mathbf {x}}^{k_j}\}\) that is convergent in the weak-\(*\) topology, i.e., there exists \({\bar{\mathbf {x}}}^* \in {\mathbb {R}}^{{\mathbb {N}}^n_{\infty }}\) such that

$$\begin{aligned} \langle f, {\mathbf {x}}^{k_j}\rangle \rightarrow \langle f, {\bar{\mathbf {x}}}^*\rangle , \; \text {as}\; j\rightarrow \infty \end{aligned}$$

for all \(f\in {\mathbb {R}}^{{\mathbb {N}}^n_{\infty }}\). So, for each \(\alpha \in {\mathbb {N}}^n\),

$$\begin{aligned} ({\mathbf {x}}^{k_j})_{\alpha } \rightarrow ({\bar{\mathbf {x}}}^*)_{\alpha }. \end{aligned}$$
(6.9)

Since \({\mathbf {x}}^k|_{{\mathcal {E}}} = {\mathbf {x}}^{*,k} \rightarrow {\mathbf {x}}^{*}\), we have \({\bar{\mathbf {x}}}^{*}|_{{\mathcal {E}}} = {\mathbf {x}}^*\). By the feasibility of \({\tilde{\mathbf {x}}}^{*,k_j}\) and the definition of \({\mathbf {x}}^{k_j}\), it is easy to check that \({\bar{\mathbf {x}}}^{*}\in {\mathbb {R}}^{{\mathbb {N}}^n_{\infty }}\) is a full moment sequence whose localizing matrices of all orders are positive semidefinite; indeed, each such matrix of a fixed order involves only finitely many entries, so its positive semidefiniteness passes to the limit by the entrywise convergence (6.9). Thus, \({\bar{\mathbf {x}}}^*\) admits a \(\Delta \)-measure (cf. [28, Lemma 3.2]). This implies that \({\mathbf {x}}^*={\bar{\mathbf {x}}}^{*}|_{{\mathcal {E}}}\in {\mathcal {R}}\).

Since \(({\mathbf {x}}^{*,k}, w^{*,k}, y^{*,k}, \gamma ^{*,k})\) satisfies (6.2) and \((w^{*,k},\gamma ^{*,k}) \in {\mathcal {L}}_{{\bar{n}}+1}\), we know that \(({\mathbf {x}}^{*}, w^{*}, y^{*}, \gamma ^{*})\) satisfies

$$\begin{aligned} c \circ \left( {\mathbf {x}}^{*} +\sum _{i=1}^m y^{*}_i \mathrm {vech}(A_i)\right) -w^{*} =c \circ \mathrm {vech}(A_0), \quad (w^{*},\gamma ^{*}) \in {\mathcal {L}}_{{\bar{n}}+1}. \end{aligned}$$

In view of \({\mathbf {x}}^*\in {\mathcal {R}}\), we see that \(({\mathbf {x}}^{*}, w^{*}, y^{*}, \gamma ^{*})\) is feasible for (P) and \(\vartheta _P \le \gamma ^{*}\). Because (\(P^k\)) is a relaxation of (P) and \(({\mathbf {x}}^{*,k}, w^{*,k}, y^{*,k}, \gamma ^{*,k})\) is a minimizer of (\(P^k\)), it holds that

$$\begin{aligned} \vartheta _P \ge \gamma ^{*,k}, \quad k=2,3,\ldots . \end{aligned}$$

Hence, we get

$$\begin{aligned} \vartheta _P \ge \lim _{k\rightarrow \infty }\gamma ^{*,k}= \gamma ^{*}. \end{aligned}$$

Therefore, \(\vartheta _P= \gamma ^{*}\) and \(({\mathbf {x}}^{*}, w^{*}, y^{*}, \gamma ^{*})\) is a minimizer of (P). Since \(\{\gamma ^{*,k}\}\) is monotonically increasing, it converges to the minimum of (1.4). \(\square \)

About this article

Cite this article

Fan, J., Zhou, A. Computing the distance between the linear matrix pencil and the completely positive cone. Comput Optim Appl 64, 647–670 (2016). https://doi.org/10.1007/s10589-016-9825-1
