Eigenvalue, quadratic programming, and semidefinite programming relaxations for a cut minimization problem


Abstract

We consider the problem of partitioning the node set of a graph into k sets of given sizes in order to minimize the cut obtained using (removing) the kth set. If the resulting cut has value 0, then we have obtained a vertex separator. This problem is closely related to the graph partitioning problem. In fact, the model we use is the same as that for the graph partitioning problem except for a different quadratic objective function. We look at known and new bounds obtained from various relaxations for this NP-hard problem. These include the standard eigenvalue bound, projected eigenvalue bounds using both the adjacency matrix and the Laplacian, quadratic programming (QP) bounds based on recent successful QP bounds for the quadratic assignment problem, and semidefinite programming (SDP) bounds. We include numerical tests for large and huge problems that illustrate the efficiency of the bounds in terms of strength and time.


Notes

  1. A discussion of the relationship of \({{\mathrm{{cut}}}}(m)\) with the bandwidth of the graph is given in, e.g., [8, 18, 22]. In particular, for \(k=3\), if \({{\mathrm{{cut}}}}(m)>0\), then \(m_3 + 1\) is a lower bound for the bandwidth.

  2. Indeed, if Y is irreducible, then the largest-in-magnitude eigenvalue is positive and simple, and the corresponding eigenspace is the span of a positive vector; hence the conclusion follows. For a reducible Y, the symmetry of Y implies that it is permutation-similar to a block diagonal matrix whose blocks are irreducible. Thus, we can apply the same argument blockwise to conclude the analogous result for the eigenspace corresponding to the largest-magnitude eigenvalue. (A small numerical illustration of the irreducible case is sketched after these notes.)

  3. The doubly nonnegative programming relaxation is obtained by imposing the constraint \({{\widehat{V}}} Z {{\widehat{V}}}^T\ge 0\) on \((\hbox {SDP}_{final})\). Like the SDP relaxation, the bound obtained from this approach is independent of d. In our implementation, we picked \(G = A\) for both the SDP and the DNN bounds.

  4. The SDP and DNN problems are solved via SDPT3 (version 4.0) [27], with the tolerance gaptol set to \(1e{-6}\) and \(1e{-3}\), respectively. The problems (4.4) and (4.8) are solved via SDPT3 (version 4.0) called by CVX (version 1.22) [11], using the default settings. The problem (6.1) is solved using the simplex method in MATLAB, again with the default settings.

  5. In this case, the approximate optimal value of (4.8) returned by the SDP solver is on the order of \(10^{-5}\). We obtain a 1 for the QP lower bound since we always round up to the smallest integer exceeding this value.

  6. Choosing a sparse V in the orthogonal matrix in (3.7) would speed up the calculation of the eigenvalues. Choosing a sparse V would be easier if V were required to have only linearly independent, rather than orthonormal, columns, i.e., if we could arrange for a parametrization as in Lemma 3.6 without P orthogonal.
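
A minimal numerical sketch (not from the paper) of the Perron–Frobenius facts used in Note 2, assuming NumPy is available; the test matrix and its size are arbitrary choices for illustration.

```python
import numpy as np

# Build a symmetric, entrywise-positive (hence irreducible) matrix Y.
rng = np.random.default_rng(0)
B = rng.random((6, 6))
Y = (B + B.T) / 2

vals, vecs = np.linalg.eigh(Y)      # eigenvalues in ascending order
i = int(np.argmax(np.abs(vals)))    # index of the largest-magnitude eigenvalue
v = vecs[:, i]
v = v if v.sum() >= 0 else -v       # eigenvectors are defined only up to sign

print(vals[i] > 0)                  # True: the largest-magnitude eigenvalue is positive
print(np.all(v > 0))                # True: its eigenvector can be taken entrywise positive
```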

References

  1. Anstreicher, K.M., Brixius, N.W.: A new bound for the quadratic assignment problem based on convex quadratic programming. Math. Program. 89(3, Ser. A), 341–357 (2001)


  2. Anstreicher, K.M., Wolkowicz, H.: On Lagrangian relaxation of quadratic matrix constraints. SIAM J. Matrix Anal. Appl. 22(1), 41–55 (2000)


  3. Balas, E., Ceria, S., Cornuejols, G.: A lift-and-project cutting plane algorithm for mixed 0–1 programs. Math. Program. 58, 295–324 (1993)


  4. Borwein, J.M., Wolkowicz, H.: Facial reduction for a cone-convex programming problem. J. Aust. Math. Soc. A 30(3), 369–380 (1980/1981)

  5. Brixius, N.W., Anstreicher, K.M.: Solving quadratic assignment problems using convex quadratic programming relaxations. Optim. Methods Softw. 16(1–4), 49–68 (2001). Dedicated to Professor Laurence C. W. Dixon on the occasion of his 65th birthday


  6. Brualdi, R.A., Ryser, H.J.: Combinatorial Matrix Theory. Cambridge University Press, New York (1991)


  7. Cheung, Y.-L., Schurr, S., Wolkowicz, H.: Preprocessing and regularization for degenerate semidefinite programs. In: Bailey, D.H., Bauschke, H.H., Borwein, P., Garvan, F., Thera, M., Vanderwerff, J., Wolkowicz, H. (eds.) Computational and Analytical Mathematics: In Honor of Jonathan Borwein’s 60th Birthday. Springer Proceedings in Mathematics & Statistics, vol. 50, pp. 225–276. Springer, New York (2013)


  8. de Klerk, E., Nagy, M.E., Sotirov, R.: On semidefinite programming bounds for graph bandwidth. Optim. Methods Softw. 28(3), 485–500 (2013)


  9. Demmel, J.W.: Applied Numerical Linear Algebra. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (1997)


  10. Falkner, J., Rendl, F., Wolkowicz, H.: A computational study of graph partitioning. Math. Program. 66(2, Ser. A), 211–239 (1994)


  11. Grant, M., Boyd, S., Ye, Y.: Disciplined convex programming. In: Global Optimization. Nonconvex Global Optimization, vol. 84, pp. 155–210. Springer, New York (2006)

  12. Hadley, S.W., Rendl, F., Wolkowicz, H.: A new lower bound via projection for the quadratic assignment problem. Math. Oper. Res. 17(3), 727–739 (1992)


  13. Hager, W.W., Hungerford, J.T.: A continuous quadratic programming formulation of the vertex separator problem. Report, University of Florida, Gainesville (2013)

  14. Hoffman, A.J., Wielandt, H.W.: The variation of the spectrum of a normal matrix. Duke Math. J. 20, 37–39 (1953)


  15. Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1990). Corrected reprint of the 1985 original

  16. Lewis, R.H.: Yet another graph partitioning problem is NP-Hard. Report. arXiv:1403.5544 [cs.CC] (2014)

  17. Lovász, L., Schrijver, A.: Cones of matrices and set-functions and 0–1 optimization. SIAM J. Optim. 1(2), 166–190 (1991)


  18. Martí, R., Campos, V., Piñana, E.: A branch and bound algorithm for the matrix bandwidth minimization. Eur. J. Oper. Res. 186(2), 513–528 (2008)


  19. Povh, J., Rendl, F.: Approximating non-convex quadratic programs by semidefinite and copositive programming. In: KOI 2006—11th International Conference on Operational Research, pp. 35–45. Croatian Operational Research Review, Zagreb (2008)

  20. Rendl, F., Wolkowicz, H.: Applications of parametric programming and eigenvalue maximization to the quadratic assignment problem. Math. Program. 53(1, Ser. A), 63–78 (1992)


  21. Rendl, F., Wolkowicz, H.: A projection technique for partitioning the nodes of a graph. Ann. Oper. Res. 58, 155–179 (1995). Applied mathematical programming and modeling, II (APMOD 93) (Budapest, 1993)


  22. Rendl, F., Lisser, A., Piacentini, M.: Bandwidth, vertex separators and eigenvalue optimization. In: Discrete Geometry and Optimization. The Fields Institute for Research in Mathematical Sciences. Communications Series. pp. 249–263. Springer, New York (2013)

  23. Schrijver, A.: Theory of Linear and Integer Programming. Wiley-Interscience Series in Discrete Mathematics. Wiley, Chichester (1986)


  24. Sherali, H.D., Adams, W.P.: Computational advances using the reformulation-linearization technique (RLT) to solve discrete and continuous nonconvex problems. Optima 49, 1–6 (1996)


  25. Tardos, E.: A strongly polynomial algorithm to solve combinatorial linear programs. Oper. Res. 34(2), 250–256 (1986)


  26. Tardos, E.: Strongly polynomial and combinatorial algorithms in optimization. In: Proceedings of the International Congress of Mathematicians, Vol. I, II (Kyoto, 1990), pp. 1467–1478. Mathematical Society of Japan, Tokyo (1991)

  27. Tütüncü, R.H., Toh, K.C., Todd, M.J.: Solving semidefinite-quadratic-linear programs using SDPT3. Math. Program. 95(2, Ser. B), 189–217 (2003). Computational semidefinite and second order cone programming: the state of the art


  28. Wolkowicz, H., Zhao, Q.: Semidefinite programming relaxations for the graph partitioning problem. Discrete Appl. Math. 96(97), 461–479 (1999). Selected for the special Editors’ Choice, Edition 1999


  29. Zhao, Q., Karisch, S.E., Rendl, F., Wolkowicz, H.: Semidefinite programming relaxations for the quadratic assignment problem. J. Comb. Optim. 2(1), 71–109 (1998). Semidefinite programming and interior-point approaches for combinatorial optimization problems (Fields Institute, Toronto, ON, 1996)



Acknowledgments

T. K. Pong was supported partly by a research grant from Hong Kong Polytechnic University. He was also supported as a PIMS postdoctoral fellow at the Department of Computer Science, University of British Columbia, Vancouver, during the early stage of the preparation of the manuscript. Research of H. Sun was supported by an Undergraduate Student Research Award from the Natural Sciences and Engineering Research Council of Canada. Research of N. Wang was supported by the Natural Sciences and Engineering Research Council of Canada and by the U.S. Air Force Office of Scientific Research. Research of H. Wolkowicz was supported in part by the Natural Sciences and Engineering Research Council of Canada and by the U.S. Air Force Office of Scientific Research.

Author information


Corresponding author

Correspondence to Henry Wolkowicz.

Additional information

Presented at Retrospective Workshop on Discrete Geometry, Optimization and Symmetry, November 24–29, 2013, Fields Institute, Toronto, Canada.

Appendix: Notation for the SDP relaxation


In this appendix, we describe the constraints of the SDP relaxation (5.3) in detail. A small computational sketch of the linear transformations involved is given after the list.

  1.

    The arrow linear transformation acts on \(\mathcal {S}^{kn+1}\),

    $$\begin{aligned} \mathrm{arrow\,}(Y) := {{\mathrm{{diag}}}}(Y) - (0,Y_{0,1:kn})^T, \end{aligned}$$
    (9.1)

    where \(Y_{0,1:kn}\) is the vector formed from the last kn components of the first row (indexed by 0) of Y. The arrow constraint represents \(X \in \mathcal {Z} \).

  2.

    The norm constraints for \(X\in \mathcal {E} \) are represented by constraints involving the two \((kn+1) \times (kn+1)\) matrices

    $$\begin{aligned} D_1 := \begin{bmatrix} n & -e_{k}^T \otimes e_{n}^T \\ -e_{k} \otimes e_{n} & (e_{k}e_{k}^T) \otimes I_{n} \end{bmatrix}, \qquad D_2 := \begin{bmatrix} m^Tm & -m^T \otimes e_{n}^T \\ -m \otimes e_{n} & I_{k} \otimes (e_{n}e_{n}^T) \end{bmatrix}, \end{aligned}$$

    where \(e_j\) is the vector of ones of dimension j.

  3.

    We let \({{\mathcal {G}}} _J\) represent the gangster operator on \({\mathcal {S}}^{kn+1}\), i.e., it shoots holes/zeros in a matrix,

    $$\begin{aligned} ({{\mathcal {G}}}_{J}(Y))_{ij} := \left\{ \begin{array}{ll} Y_{ij} & \hbox{if } (i,j) \hbox{ or } (j,i) \in J,\\ 0 & \hbox{otherwise,} \end{array} \right. \qquad J := \left\{ (i,j): \begin{array}{l} i=(p-1)n+q,\ j=(r-1)n+q,\\ \hbox{for } p<r,\ p,r \in \{1,\ldots ,k\},\ q \in \{1,\ldots ,n\} \end{array} \right\}. \end{aligned}$$
    (9.2)

    The gangster constraint represents the (Hadamard) orthogonality of the columns of X. The positions of the zeros are the diagonal elements of the off-diagonal blocks \({\bar{Y}}_{(ij)}\), \(1\le i<j\le k\), of Y; see the block structure in (9.3) below.

  4.

    Again, by abuse of notation, we use the symbols for the sets of constraints \(\mathcal {D} _O,\mathcal {D} _e\) to represent the linear transformations in the SDP relaxation (5.3). Note that

    $$\begin{aligned} \langle \Psi , X^TX \rangle = {{\mathrm{{trace}}}}\, I X \Psi X^T = {{\mathrm{{vec}}}}(X)^T (\Psi \otimes I) {{\mathrm{{vec}}}}(X). \end{aligned}$$

    Therefore, the adjoint of \(\mathcal {D} _O\) is made up of a zero row/column and \(k^2\) blocks that are multiples of the identity.

    If Y is blocked appropriately as

    $$\begin{aligned} Y= \begin{bmatrix} Y_{00} & Y_{0,:} \\ Y_{:,0} & {\bar{Y}} \end{bmatrix}, \quad {\bar{Y}} = \begin{bmatrix} {\bar{Y}}_{(11)} & {\bar{Y}}_{(12)} & \cdots & {\bar{Y}}_{(1k)}\\ {\bar{Y}}_{(21)} & {\bar{Y}}_{(22)} & \cdots & {\bar{Y}}_{(2k)}\\ \vdots & \ddots & \ddots & \vdots \\ {\bar{Y}}_{(k1)} & \cdots & \cdots & {\bar{Y}}_{(kk)} \end{bmatrix}, \end{aligned}$$
    (9.3)

    with each \({\bar{Y}}_{(ij)}\) being an \(n\times n\) matrix, then

    $$\begin{aligned} \mathcal {D} _O(Y)= \left( {{\mathrm{{trace}}}}\, {\bar{Y}}_{(ij)}\right) \in \mathcal {S}^{k}. \end{aligned}$$
    (9.4)

    Similarly,

    $$\begin{aligned} \langle \phi , {{\mathrm{{diag}}}}(XX^T) \rangle =\langle {{\mathrm{{Diag}}}}(\phi ), XX^T \rangle ={{\mathrm{{vec}}}}(X)^T\left( I_k \otimes {{\mathrm{{Diag}}}}(\phi )\right) {{\mathrm{{vec}}}}(X). \end{aligned}$$

    Therefore, we get the sum of the diagonals of the diagonal blocks

    $$\begin{aligned} \mathcal {D} _e(Y)= \sum _{i=1}^k {{\mathrm{{diag}}}}{\bar{Y}}_{(ii)} \in {\mathbb {R}}^n. \end{aligned}$$
    (9.5)
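
As mentioned at the start of this appendix, the following is a minimal NumPy sketch (not part of the paper) of the linear transformations above: the arrow operator (9.1), the norm-constraint matrices \(D_1, D_2\), the gangster operator (9.2), and the block maps (9.4)–(9.5). The function names, the example data, and the 0-based indexing convention (row/column 0 of Y first, followed by the kn entries of \({{\mathrm{{vec}}}}(X)\) blocked column by column) are our own assumptions.

```python
import numpy as np

def arrow(Y):
    """arrow(Y) = diag(Y) - (0, Y_{0,1:kn})^T, cf. (9.1)."""
    d = np.diag(Y).copy()
    d[1:] -= Y[0, 1:]
    return d

def norm_constraint_matrices(m, k, n):
    """The (kn+1) x (kn+1) matrices D_1 and D_2 for the norm constraints on X in E."""
    ek, en = np.ones(k), np.ones(n)
    m = np.asarray(m, dtype=float)
    D1 = np.block([[np.array([[float(n)]]),    -np.kron(ek, en)[None, :]],
                   [-np.kron(ek, en)[:, None],  np.kron(np.outer(ek, ek), np.eye(n))]])
    D2 = np.block([[np.array([[m @ m]]),       -np.kron(m, en)[None, :]],
                   [-np.kron(m, en)[:, None],   np.kron(np.eye(k), np.outer(en, en))]])
    return D1, D2

def gangster_indices(k, n):
    """The index set J of (9.2): diagonal positions of the off-diagonal blocks of Ybar."""
    return [((p - 1) * n + q, (r - 1) * n + q)
            for p in range(1, k + 1)
            for r in range(p + 1, k + 1)
            for q in range(1, n + 1)]

def gangster(Y, J):
    """G_J(Y): keep the entries indexed by J (and their symmetric partners), zero the rest."""
    Z = np.zeros_like(Y)
    for i, j in J:
        Z[i, j], Z[j, i] = Y[i, j], Y[j, i]
    return Z

def D_O(Y, k, n):
    """D_O(Y) = (trace Ybar_(ij)), a k x k matrix, cf. (9.4)."""
    Ybar = Y[1:, 1:]
    return np.array([[np.trace(Ybar[i*n:(i+1)*n, j*n:(j+1)*n]) for j in range(k)]
                     for i in range(k)])

def D_e(Y, k, n):
    """D_e(Y) = sum_i diag(Ybar_(ii)), a vector in R^n, cf. (9.5)."""
    Ybar = Y[1:, 1:]
    return sum(np.diag(Ybar[i*n:(i+1)*n, i*n:(i+1)*n]) for i in range(k))

# Example with k = 3, n = 4 and (hypothetical) set sizes m = (2, 1, 1).
k, n = 3, 4
rng = np.random.default_rng(0)
B = rng.random((k*n + 1, k*n + 1))
Y = (B + B.T) / 2                              # a symmetric matrix in S^{kn+1}
D1, D2 = norm_constraint_matrices([2, 1, 1], k, n)
print(arrow(Y).shape, D1.shape, D2.shape)      # (13,) (13, 13) (13, 13)
print(D_O(Y, k, n).shape, D_e(Y, k, n).shape)  # (3, 3) (4,)
print(gangster(Y, gangster_indices(k, n)))     # only the gangster positions survive
```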


About this article


Cite this article

Pong, T.K., Sun, H., Wang, N. et al. Eigenvalue, quadratic programming, and semidefinite programming relaxations for a cut minimization problem. Comput Optim Appl 63, 333–364 (2016). https://doi.org/10.1007/s10589-015-9779-8

