Sum-of-squares rank upper bounds for matching problems

Published in the Journal of Combinatorial Optimization.

Abstract

The matching problem is one of the most studied combinatorial optimization problems in the context of extended formulations and convex relaxations. In this paper we provide upper bounds for the rank of the sum-of-squares/Lasserre hierarchy for a family of matching problems. In particular, we show that when the problem formulation is strengthened by incorporating the objective function in the constraints, the hierarchy requires at most \(\left\lceil \frac{k}{2} \right\rceil \) levels to refute the existence of a perfect matching in an odd clique of size \(2k+1\).


Notes

  1. An r-uniform hypergraph is given by a set of vertices V and a set of hyperedges E, where each hyperedge \(e \in E\) is incident to exactly r vertices.

  2. The appendix of Barak et al. (2014) is not available in the version published in the conference proceedings. The statement and proof can be found in Corollary A.3 in the arXiv version at https://arxiv.org/pdf/1312.6652.pdf.

References

  • Au Y, Tunçel L (2011) Complexity analyses of Bienstock–Zuckerberg and Lasserre relaxations on the matching and stable set polytopes. In: IPCO, pp 14–26

  • Au Y, Tunçel L (2013) A comprehensive analysis of polyhedral lift-and-project methods. CoRR. arXiv:1312.5972

  • Avis D, Bremner D, Tiwary HR, Watanabe O (2014) Polynomial size linear programs for non-bipartite matching problems and other problems in P. CoRR abs/1408.0807. arXiv:1408.0807

  • Barak B, Kelner JA, Steurer D (2014) Rounding sum-of-squares relaxations. In: STOC, pp 31–40. doi:10.1145/2591796.2591886

  • Braun G, Brown-Cohen J, Huq A, Pokutta S, Raghavendra P, Roy A, Weitz B, Zink D (2016) The matching problem has no small symmetric SDP. In: SODA, pp 1067–1078. doi:10.1137/1.9781611974331.ch75

  • Chan SO, Lee JR, Raghavendra P, Steurer D (2013) Approximate constraint satisfaction requires large LP relaxations. In: FOCS. IEEE Computer Society, pp 350–359. doi:10.1109/FOCS.2013.45

  • Edmonds J (1965) Paths, trees and flowers. Can J Math 17:449–467

  • Fawzi H, Saunderson J, Parrilo PA (2016) Sparse sums of squares on finite Abelian groups and improved semidefinite lifts. Math Program. doi:10.1007/s10107-015-0977-z

  • Goemans MX, Tunçel L (2001) When does the positive semidefiniteness constraint help in lifting procedures? Math Oper Res 26(4):796–815. doi:10.1287/moor.26.4.796.10012

  • Grigoriev D (2001) Linear lower bound on degrees of Positivstellensatz calculus proofs for the parity. Theor Comput Sci 259(1–2):613–622

  • Horn RA, Johnson CR (2013) Matrix analysis. Cambridge University Press, Cambridge

  • Karlin AR, Mathieu C, Nguyen CT (2011) Integrality gaps of linear and semi-definite programming relaxations for Knapsack. In: IPCO, pp 301–314

  • Kurpisz A, Leppänen S, Mastrolilli M (2015) On the hardest problem formulations for the 0/1 Lasserre hierarchy. In: Automata, languages, and programming—42nd international colloquium, ICALP 2015. Proceedings, part I, pp 872–885

  • Lasserre JB (2001) Global optimization with polynomials and the problem of moments. SIAM J Optim 11(3):796–817

  • Laurent M (2003) A comparison of the Sherali–Adams, Lovász–Schrijver, and Lasserre relaxations for 0–1 programming. Math Oper Res 28(3):470–496

  • Lee JR, Raghavendra P, Steurer D (2015) Lower bounds on the size of semidefinite programming relaxations. In: Servedio RA, Rubinfeld R (eds) STOC. ACM, pp 567–576. doi:10.1145/2746539.2746599

  • Lovász L, Schrijver A (1991) Cones of matrices and set-functions and 0–1 optimization. SIAM J Optim 1(12):166–190

  • Mathieu C, Sinclair A (2009) Sherali–Adams relaxations of the matching polytope. In: STOC, pp 293–302. doi:10.1145/1536414.1536456

  • Nesterov Y (2000) Global quadratic optimization via conic relaxation. Kluwer Academic Publishers, Dordrecht, pp 363–384

  • O’Donnell R, Zhou Y (2013) Approximability and proof complexity. In: SODA, pp 1537–1556

  • Parrilo P (2000) Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization. Ph.D. Thesis, California Institute of Technology

  • Rothvoß T (2013) The Lasserre hierarchy in approximation algorithms. In: Lecture Notes for the MAPSP 2013—Tutorial

  • Rothvoß T (2014) The matching polytope has exponential extension complexity. In: STOC, pp 263–272. doi:10.1145/2591796.2591834

  • Sherali HD, Adams WP (1990) A hierarchy of relaxations between the continuous and convex hull representations for zero-one programming problems. SIAM J Discrete Math 3(3):411–430

  • Shor N (1987) Class of global minimum bounds of polynomial functions. Cybernetics 23(6):731–734

  • Stephen T, Tunçel L (1999) On a representation of the matching polytope via semidefinite liftings. Math Oper Res 24(1):1–7

  • Worah P (2015) Rank bounds for a hierarchy of Lovász and Schrijver. J Comb Optim 30(3):689–709. doi:10.1007/s10878-013-9662-4

  • Yannakakis M (1991) Expressing combinatorial optimization problems by linear programs. J Comput Syst Sci 43(3):441–466. doi:10.1016/0022-0000(91)90024-Y

Author information

Corresponding author

Correspondence to Adam Kurpisz.

Additional information

Supported by the Swiss National Science Foundation Project 200020-144491/1 “Approximation Algorithms for Machine Scheduling Through Theory and Experiments”.

Appendix: Proof of Lemma 8

As presented in, e.g., Laurent (2003), the 0/1 Lasserre hierarchy of level t can be formulated in terms of matrices. For \(0 \le t \le n\), let \(\left( {\begin{array}{c}[n]\\ t\end{array}}\right) \), \(\left( {\begin{array}{c}[n]\\ \le t\end{array}}\right) \) denote all subsets of [n] of size exactly t and at most t, respectively. The moment matrix is the symmetric, real-valued, set-indexed matrix \(M_{t+1} \in \mathbb {S}^{\left( {\begin{array}{c}[n]\\ \le t+1\end{array}}\right) \times \left( {\begin{array}{c}[n]\\ \le t+1\end{array}}\right) }\) such that \(\left( M_{t+1}\right) _{I,J}=\tilde{\mathbb {E}}{\left( \prod _{i \in I \cup J}x_i \right) }\).

Lemma 9

\( \tilde{\mathbb {E}}{\left( u^2(x) \right) } \ge 0, ~ \forall u \in \mathbb {R}[x]_{t+1}\) iff \(M_{t+1} \succeq 0\)

Proof

$$\begin{aligned} \tilde{\mathbb {E}}{\left( u^2(x) \right) }=\tilde{\mathbb {E}}{\left( \left( \sum _{|I|\le t+1} a_I \prod _{i\in I}x_i \right) ^2 \right) } = \sum _{|I|,|J|\le t+1} a_I a_J \tilde{\mathbb {E}}{\left( \prod _{i \in I \cup J}x_i \right) }= a^\top M_{t+1} a \end{aligned}$$

where \(a \in \mathbb {R}^{\left( {\begin{array}{c}[n]\\ \le t+1\end{array}}\right) }\) is the vector of coefficients of u. The claim follows since \(a^\top M_{t+1} a \ge 0, ~ \forall a \in \mathbb {R}^{\left( {\begin{array}{c}[n]\\ \le t+1\end{array}}\right) }\) iff \(M_{t+1} \succeq 0\).\(\square \)
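The construction in Lemma 9 can be made concrete. The following sketch (not the paper's code; the pseudoexpectation table `pe` is a hypothetical symmetric example on n = 3 variables, chosen purely for illustration) builds a moment matrix from a table of pseudoexpectation values and checks the identity \(\tilde{\mathbb {E}}{\left( u^2(x)\right) } = a^\top M_{t+1} a\) for one polynomial:

```python
from fractions import Fraction
from itertools import combinations

# Sketch only (not the paper's construction): build the moment matrix
# M_{t+1} from a table of pseudoexpectation values and check Lemma 9's
# identity Etilde(u^2) = a^T M_{t+1} a for one polynomial u.

def subsets_up_to(n, r):
    """All subsets of {0, ..., n-1} of size at most r, as frozensets."""
    return [frozenset(s) for size in range(r + 1)
            for s in combinations(range(n), size)]

def moment_matrix(pe, n, t):
    """M_{t+1} with (M)_{I,J} = Etilde(prod over I union J of x_i)."""
    idx = subsets_up_to(n, t + 1)
    return [[pe[I | J] for J in idx] for I in idx], idx

# Hypothetical symmetric values: Etilde depends only on |S|.
vals = {0: Fraction(1), 1: Fraction(1, 3), 2: Fraction(1, 9), 3: Fraction(1, 27)}
pe = {S: vals[len(S)] for S in subsets_up_to(3, 3)}

M, idx = moment_matrix(pe, n=3, t=1)

# u(x) = x_0 - x_1, so over 0/1 variables
# Etilde(u^2) = Etilde(x_0) - 2 Etilde(x_0 x_1) + Etilde(x_1) = 4/9.
a = {frozenset([0]): Fraction(1), frozenset([1]): Fraction(-1)}
quad = sum(a.get(I, 0) * a.get(J, 0) * M[i][j]
           for i, I in enumerate(idx) for j, J in enumerate(idx))
print(quad)
```

Exact rational arithmetic (`fractions.Fraction`) avoids any floating-point ambiguity in the PSD-related identities.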

We consider the complete graph \(K_5\), with \(n=10\) edges, and \(t=1\). For the sake of convenience we denote the moment matrix \(M_{2}\) by M and, since the pseudoexpectation operator under consideration is symmetric, we denote each nonzero \(\tilde{\mathbb {E}}{\left( \prod _{i \in S}x_i \right) }\) by \(\alpha _1\) or \(\alpha _2\) for \(|S|=1\) or \(|S|=2\), respectively. Let M be of the form:

$$\begin{aligned} M=\begin{pmatrix}1 &{} y^{\top } &{}z^{\top } \\ y &{} A&{} B \\ z &{} B^{\top } &{} C\end{pmatrix} \end{aligned}$$

where \(y\in \mathbb {R}^{\left( {\begin{array}{c}[n]\\ 1\end{array}}\right) }\), \(z\in \mathbb {R}^{\left( {\begin{array}{c}[n]\\ 2\end{array}}\right) }\), \(A\in \mathbb {R}^{\left( {\begin{array}{c}[n]\\ 1\end{array}}\right) \times \left( {\begin{array}{c}[n]\\ 1\end{array}}\right) }\), \(B\in \mathbb {R}^{\left( {\begin{array}{c}[n]\\ 1\end{array}}\right) \times \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) }\) and \(C\in \mathbb {R}^{\left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \times \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) }\).

In the following we cite a lemma from Kurpisz et al. (2015) that gives necessary and sufficient conditions for a diagonal plus rank-one matrix to be PSD.

Lemma 10

(Kurpisz et al. 2015) For a diagonal matrix \(D\in \mathbb {R}^{m \times m}\) with diagonal entries \(d_1,\ldots , d_{m}\), the matrix \(D + d_{m+1} \mathbf 1 \mathbf 1 ^\top \) is positive semidefinite if and only if either \(d_i\ge 0\) for all \(i \in [m+1]\), or all of the following hold:

$$\begin{aligned} d_j&< 0, \quad \textit{for exactly one } j \in [m+1],\end{aligned}$$
(6)
$$\begin{aligned} d_k&> 0, \quad \textit{for all } k \ne j \in [m+1], \end{aligned}$$
(7)
$$\begin{aligned} \sum _{i \in [m+1]} \frac{1}{d_i}&\le 0 \end{aligned}$$
(8)
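Lemma 10's criterion can be sanity-checked mechanically. The sketch below (an illustration, not part of the paper) compares the stated condition with a brute-force PSD test via principal minors, in exact rational arithmetic, over a small grid of diagonal values with m = 3:

```python
from fractions import Fraction
from itertools import combinations

# Sanity check of Lemma 10 (an illustration, not part of the paper):
# compare the stated criterion for D + d_{m+1} 11^T being PSD with a
# brute-force test via principal minors, in exact rational arithmetic.

def det(mat):
    """Determinant by Laplace expansion (fine for tiny matrices)."""
    if not mat:
        return Fraction(1)
    return sum((-1) ** j * mat[0][j]
               * det([row[:j] + row[j + 1:] for row in mat[1:]])
               for j in range(len(mat)))

def is_psd(mat):
    """A symmetric matrix is PSD iff every principal minor is >= 0."""
    n = len(mat)
    return all(det([[mat[i][j] for j in S] for i in S]) >= 0
               for r in range(1, n + 1) for S in combinations(range(n), r))

def lemma10_criterion(ds):
    """ds = (d_1, ..., d_{m+1}); the PSD prediction of Lemma 10."""
    if all(d >= 0 for d in ds):
        return True
    negatives = [d for d in ds if d < 0]
    if len(negatives) != 1 or any(d == 0 for d in ds):
        return False  # conditions (6)-(7) fail
    return sum(Fraction(1) / d for d in ds) <= 0  # condition (8)

def build(ds):
    """The matrix D + d_{m+1} 11^T for ds = (d_1, ..., d_m, d_{m+1})."""
    m = len(ds) - 1
    return [[(ds[i] if i == j else Fraction(0)) + ds[m]
             for j in range(m)] for i in range(m)]

grid = [Fraction(v) for v in (-2, -1, 1, 2)]
cases = [(a, b, c, d) for a in grid for b in grid for c in grid for d in grid]
agree = all(lemma10_criterion(ds) == is_psd(build(ds)) for ds in cases)

# One case satisfying (6)-(8): sum of reciprocals is 3/4 - 1 <= 0.
tight = (Fraction(4), Fraction(4), Fraction(4), Fraction(-1))
print(agree, lemma10_criterion(tight), is_psd(build(tight)))
```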

We prove Lemma 8 by contradiction. Assume that there exists a pseudoexpectation operator \(\tilde{\mathbb {E}}{\left( \cdot \right) }\) satisfying (2) for the maximum matching problem (3) with \(b=1\) in \(K_5\) at level \(t=1\), with an objective value greater than 2. Since such an operator is assumed to exist, the corresponding moment matrix M is PSD.

Lemma 11

For every feasible symmetric pseudoexpectation operator \(\tilde{\mathbb {E}}{\left( \cdot \right) }\), \(\alpha _2 \le 1/15\).

Proof

For every feasible pseudoexpectation operator, the corresponding moment matrix M must be PSD, and (by Observation 7.1.12 in  Horn and Johnson (2013)) every principal submatrix of M must be PSD. Consider the following principal submatrix \(P_1\)

$$\begin{aligned} P_1=\begin{pmatrix}1 &{}{z'}^{\top } \\ {z'} &{} C'\end{pmatrix} \end{aligned}$$

where \(z'\) and \(C'\) consist of the nonzero rows/columns of z and C, respectively (by Lemma 5, which also holds for relaxation (3), some rows are zero). Since, by Lemma 6, a maximum matching in \(K_5\) has size 2, Lemma 5 implies that the matrix \(C'\) is diagonal. By the Schur complement theorem,

$$\begin{aligned} P_1 \succeq 0 \Leftrightarrow C' - z' {z'}^{\top } \succeq 0 \end{aligned}$$

Since we consider a symmetric pseudoexpectation operator, \(z'=\alpha _2 \mathbf 1 \). We have

$$\begin{aligned} C' - z' {z'}^{\top } = C'-\alpha _2^2\mathbf 1 \mathbf 1 ^\top \end{aligned}$$

where \(C'\) is a diagonal matrix with diagonal entries \(\alpha _2\). Finally, since there are 15 subsets of cardinality 2 corresponding to matchings of size 2, producing the 15 nonzero rows/columns of C, condition (8) of Lemma 10 gives:

$$\begin{aligned} \frac{15}{\alpha _2}\le \frac{1}{\alpha _2^2} \end{aligned}$$

and thus \(\alpha _2\le 1/15\). \(\square \)
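The step above is easy to confirm in exact arithmetic. The following sketch (illustrative only) evaluates Lemma 10's condition (8) for the matrix \(C'-\alpha _2^2\mathbf 1 \mathbf 1 ^\top \), i.e., \(15/\alpha _2 - 1/\alpha _2^2\), at a few values of \(\alpha _2\):

```python
from fractions import Fraction

# Quick exact check of Lemma 11's step (illustrative): for the matrix
# C' - alpha2^2 11^T, Lemma 10's condition (8) reads
#     15/alpha2 - 1/alpha2^2 <= 0,
# which holds exactly when 0 < alpha2 <= 1/15.

def condition8(alpha2):
    """Sum of reciprocals of the d_i for the matrix in Lemma 11."""
    return 15 / alpha2 - 1 / alpha2 ** 2

boundary = condition8(Fraction(1, 15))   # exactly zero at alpha2 = 1/15
above = condition8(Fraction(1, 14))      # violated for larger alpha2
below = condition8(Fraction(1, 16))      # satisfied for smaller alpha2
print(boundary, above, below)
```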

Lemma 12

For every feasible symmetric pseudoexpectation operator \(\tilde{\mathbb {E}}{\left( \cdot \right) }\), \(\alpha _1\le 1/5\).

Proof

Consider the following principal submatrix \(P_2\) of M

$$\begin{aligned} P_2=\begin{pmatrix}1 &{}{y}^{\top } \\ {y} &{} A\end{pmatrix} \end{aligned}$$

For every feasible pseudoexpectation operator, the matrix \(P_2\) must be PSD. By the Schur complement theorem,

$$\begin{aligned} P_2 \succeq 0 \Leftrightarrow A - y {y}^{\top } \succeq 0 \end{aligned}$$

It remains to show that PSDness of the matrix \(A - y {y}^{\top }\) implies that \(\alpha _1\le 1/5\). We use the fact that \(A - y {y}^{\top }\) belongs to the Bose–Mesner algebra of the Johnson scheme J(5, 2). Indeed, A belongs to the algebra and, since \(y {y}^{\top }=\alpha _1^2 \mathbf 1 \mathbf 1 ^\top \), so does \(A - y {y}^{\top }\). This implies that the eigenvalues of \(A - y {y}^{\top }\) can be computed explicitly using Eberlein polynomials.

More precisely, the Johnson scheme J(v, k) is defined as a sequence of 0/1 matrices \(A_0,\ldots , A_k\), with rows and columns indexed by the k-subsets of [v], such that \(\left( A_i \right) _{I,J}=1\) iff \(|I \triangle J|=2i\). Thus \(A_0=I\) and \(\sum _{i=0}^k A_i=J\), the all-ones matrix. A fundamental property of the Johnson scheme is that the distinct eigenvalues of a matrix \(Z=\sum _{i=0}^k a_i A_i\) can be expressed as \(\lambda _u=\sum _{i=0}^k a_i Q_i(u)\), for \(u\in \{0,\ldots ,k\}\), where \(Q_i(u)\) is the Eberlein polynomial of the form:

$$\begin{aligned} Q_l(u)=\sum _{j=0}^l (-1)^j \left( {\begin{array}{c}u\\ j\end{array}}\right) \left( {\begin{array}{c}k-u\\ l-j\end{array}}\right) \left( {\begin{array}{c}v-k-u\\ l-j\end{array}}\right) \end{aligned}$$

It is easy to see that the matrix A belongs to the Bose–Mesner algebra of the Johnson scheme J(5, 2): there is no \(A_1\) term, because two edges sharing a vertex do not form a matching, and so A can be expressed as

$$\begin{aligned} A=\alpha _1 A_0 + \alpha _2 A_2 \end{aligned}$$

Since \(y {y}^{\top }=\alpha _1^2 \left( A_0+A_1+A_2\right) \), the matrix \(A- y {y}^{\top }\) also lies in the algebra and can be expressed as

$$\begin{aligned} A- y {y}^{\top }=(\alpha _1-\alpha _1^2) A_0 - \alpha _1^2 A_1 + (\alpha _2-\alpha _1^2) A_2 \end{aligned}$$

and the distinct eigenvalues of \(A- y {y}^{\top }\) can be expressed as

$$\begin{aligned} \lambda _u=(\alpha _1-\alpha _1^2) Q_0(u) -\alpha _1^2 Q_1(u) +(\alpha _2-\alpha _1^2) Q_2(u), \quad u \in \{0,1,2\} \end{aligned}$$

First note that:

$$\begin{aligned} Q_0(u)= & {} 1\\ Q_1(u)= & {} 6 + (u-6 ) u\\ Q_2(u)= & {} \frac{1}{4} \left( u(u-6) (u-3)^2+12\right) \end{aligned}$$

and thus we get the following distinct values of \(\lambda _u\) for \(u \in \{0,1,2\}\)

$$\begin{aligned} \lambda _0= & {} -10\alpha _1^2 +\alpha _1 +3 \alpha _2\\ \lambda _1= & {} \alpha _1-2 \alpha _2\\ \lambda _2= & {} \alpha _1+\alpha _2 \end{aligned}$$
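Both the closed forms of the Eberlein polynomials and the resulting eigenvalue formulas can be verified mechanically. The sketch below (not from the paper) recomputes \(Q_0, Q_1, Q_2\) from the defining alternating sum for J(5, 2) and checks the \(\lambda _u\) formulas at arbitrary rational values of \(\alpha _1, \alpha _2\):

```python
from fractions import Fraction
from math import comb

# Exact check (a sketch, not the paper's code): the closed forms for
# Q_0, Q_1, Q_2 follow from the defining Eberlein sum for J(5, 2), and
# the lambda_u formulas follow from lambda_u = sum_i c_i Q_i(u) with
# c_0 = a1 - a1^2, c_1 = -a1^2, c_2 = a2 - a1^2.

V, K = 5, 2

def eberlein(l, u):
    """Q_l(u) from the alternating sum in the text."""
    return sum((-1) ** j * comb(u, j) * comb(K - u, l - j)
               * comb(V - K - u, l - j) for j in range(l + 1))

def q1(u): return 6 + (u - 6) * u
def q2(u): return Fraction(u * (u - 6) * (u - 3) ** 2 + 12, 4)

closed_forms_ok = all(eberlein(0, u) == 1 and eberlein(1, u) == q1(u)
                      and eberlein(2, u) == q2(u) for u in range(K + 1))

# Check the lambda_u formulas at arbitrary rational a1, a2.
a1, a2 = Fraction(1, 7), Fraction(1, 20)
c = [a1 - a1 ** 2, -a1 ** 2, a2 - a1 ** 2]
claimed = [-10 * a1 ** 2 + a1 + 3 * a2, a1 - 2 * a2, a1 + a2]
lambdas_ok = all(sum(c[i] * eberlein(i, u) for i in range(3)) == claimed[u]
                 for u in range(3))
print(closed_forms_ok, lambdas_ok)
```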

Since the matrix \(A - y {y}^{\top }\) must be PSD, in particular \(\lambda _0 \ge 0\). Note that \(\lambda _0\) is increasing in \(\alpha _2\); thus, by Lemma 11, it suffices to consider \(\alpha _2=1/15\), which gives

$$\begin{aligned} \lambda _0=-10\alpha _1^2+\alpha _1+\frac{1}{5}\ge 0 \end{aligned}$$

which implies that \(\alpha _1 \le 1/5\), since the roots of \(-10\alpha _1^2+\alpha _1+\frac{1}{5}\) are \(-1/10\) and 1/5. \(\square \)
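As a further check, the eigenvalue computation can be confirmed directly on the 10×10 matrices. The sketch below (illustrative; the boundary values \(\alpha _1=1/5\), \(\alpha _2=1/15\) are chosen for concreteness) builds \(A_0, A_1, A_2\) of J(5, 2), forms \(A - y {y}^{\top }\), and verifies that it is annihilated by \((x-\lambda _0)(x-\lambda _1)(x-\lambda _2)\), so its spectrum is contained in \(\{\lambda _0,\lambda _1,\lambda _2\}\):

```python
from fractions import Fraction
from itertools import combinations

# Illustrative exact check: build A_0, A_1, A_2 of J(5, 2) on the 10
# edges of K_5, form Z = A - y y^T at alpha1 = 1/5, alpha2 = 1/15, and
# verify (Z - l0 I)(Z - l1 I)(Z - l2 I) = 0, so the eigenvalues of Z
# all lie in {l0, l1, l2}.

pairs = [frozenset(p) for p in combinations(range(5), 2)]
n = len(pairs)  # 10 edges

def scheme_matrix(i):
    """(A_i)_{I,J} = 1 iff |I symmetric-difference J| = 2i."""
    return [[1 if len(I ^ J) == 2 * i else 0 for J in pairs] for I in pairs]

A0, A1, A2 = (scheme_matrix(i) for i in range(3))

a1, a2 = Fraction(1, 5), Fraction(1, 15)
Z = [[(a1 - a1 ** 2) * A0[i][j] - a1 ** 2 * A1[i][j]
      + (a2 - a1 ** 2) * A2[i][j] for j in range(n)] for i in range(n)]

# The distinct eigenvalues claimed in the text.
l0 = -10 * a1 ** 2 + a1 + 3 * a2
l1 = a1 - 2 * a2
l2 = a1 + a2

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def shift(X, lam):
    """X - lam * I."""
    return [[X[i][j] - (lam if i == j else 0)
             for j in range(n)] for i in range(n)]

P = matmul(matmul(shift(Z, l0), shift(Z, l1)), shift(Z, l2))
vanishes = all(entry == 0 for row in P for entry in row)
print(vanishes, l0, l1, l2)
```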

By Lemma 12 we get that \(\alpha _1\le 1/5\). This is a contradiction: the objective value of a symmetric pseudoexpectation operator equals \(\sum _{e \in E} \tilde{\mathbb {E}}{\left( x_e \right) }=10\alpha _1 \le 2\), whereas the assumed operator had an objective value greater than 2.
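The closing numerics can also be checked exactly. A small sketch (assuming, as above, that the objective of a symmetric operator sums the 10 edge variables): the larger root of \(\lambda _0=-10\alpha _1^2+\alpha _1+\frac{1}{5}\) is 1/5, and \(10\cdot \frac{1}{5}=2\):

```python
from fractions import Fraction

# Illustrative exact check of the closing step: the roots of
# lambda_0 = -10 a^2 + a + 1/5 are (1 +/- 3)/20, the larger being 1/5,
# and a symmetric operator's objective 10 * alpha_1 is then at most 2.

def lam0(a):
    return -10 * a ** 2 + a + Fraction(1, 5)

roots = [Fraction(1 - 3, 20), Fraction(1 + 3, 20)]  # discriminant 1 + 8 = 9
objective_bound = 10 * roots[1]
print(roots, objective_bound)
```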

About this article

Cite this article

Kurpisz, A., Leppänen, S. & Mastrolilli, M. Sum-of-squares rank upper bounds for matching problems. J Comb Optim 36, 831–844 (2018). https://doi.org/10.1007/s10878-017-0169-2

