On the solution of nonconvex cardinality Boolean quadratic programming problems: a computational study

Published in: Computational Optimization and Applications

Abstract

This paper addresses the solution of a cardinality Boolean quadratic programming (CBQP) problem using three different approaches. The first transforms the original problem into six mixed-integer linear programming (MILP) formulations. The second takes one of the MILP formulations and relies on specific features of an MILP solver, namely starting incumbents, solution polishing, and callbacks. The third involves the direct solution of the original problem by solvers that can accommodate the nonlinear combinatorial problem. Particular emphasis is placed on the definition of the MILP reformulations and on their comparison with the other approaches. The results indicate that the problem data have a strong influence on the performance of the different approaches, and that for some instances of the data one approach is clearly better than the others. A detailed analysis of the results is made to identify the most effective approach for specific instances of the data.



Notes

  1. The term linear-programming-based branch & bound (B&B) refers to implicit enumeration algorithms that use linear programming relaxations to calculate bounds. The term is used to distinguish these algorithms from recent semidefinite-programming-based B&B algorithms [1].

  2. CPLEX options: qtolin = 1 selects automatic linearization of the nonlinear terms; qtolin = 0 selects convexification of the objective function. GUROBI options: preqlinearize = 1 and premiqpmethod = 1 select automatic linearization of the nonlinear terms; preqlinearize = 0 and premiqpmethod = 0 select convexification of the objective function.

  3. For the larger problems with formulation (F4), the MILP solvers are not able to solve the relaxation at the root node within the time limit of 7200 s. One way to address this issue is to instruct the MILP solver to use only the barrier method at the root node, with parallelization and a large number of threads, e.g., 40 threads. With this approach the time to solve the relaxation decreases substantially and the solver can start the tree search; however, the solver still has to handle a large LP problem at each node.

References

  1. Krislock, N., Malick, J., Roupin, F.: Improved semidefinite bounding procedure for solving max-cut problems to optimality. Math. Program. Ser. A 143, 61–86 (2014)

  2. Wolsey, L.A.: Integer Programming. Wiley Series in Discrete Mathematics and Optimization. Wiley, Chichester (1998)

  3. Loiola, E.M., Abreu, N.M.M., Boaventura-Netto, P.O., Hahn, P., Querido, T.: A survey for the quadratic assignment problem. Eur. J. Oper. Res. 176(2), 657–690 (2007)

  4. Billionnet, A., Costa, M.C., Sutter, A.: An efficient algorithm for a task allocation problem. J. ACM 39(3), 502–518 (1992)

  5. Phillips, A.T., Rosen, J.B.: A quadratic assignment formulation of the molecular conformation problem. J. Glob. Optim. 4(2), 229–241 (1994)

  6. Barahona, F., Junger, M., Reinelt, G.: Experiments in quadratic 0–1 programming. Math. Program. 44(2), 127–137 (1989)

  7. Caprara, A.: Constrained 0–1 quadratic programming: basic approaches and extensions. Eur. J. Oper. Res. 187(3), 1494–1503 (2008)

  8. Padberg, M.: The boolean quadric polytope—some characteristics, facets and relatives. Math. Program. 45(1), 139–172 (1989)

  9. Billionnet, A.: Different formulations for solving the heaviest k-subgraph problem. INFOR 43(3), 171–186 (2005)

  10. Pisinger, D.: Upper bounds and exact algorithms for p-dispersion problems. Comput. Oper. Res. 33(5), 1380–1398 (2006)

  11. Marti, R., Gallego, M., Duarte, A.: A branch and bound algorithm for the maximum diversity problem. Eur. J. Oper. Res. 200(1), 36–44 (2010)

  12. Malick, J., Roupin, F.: Solving k-cluster problems to optimality with semidefinite programming. Math. Program. 136(2, SI), 279–300 (2012)

  13. Bertsimas, D., Shioda, R.: Algorithm for cardinality-constrained quadratic optimization. Comput. Optim. Appl. 43(1), 1–22 (2009)

  14. IBM ILOG CPLEX Optimization Studio - CPLEX Users Manual - Version 12 Release 6. IBM (2013)

  15. Bruglieri, M., Ehrgott, M., Hamacher, H.W., Maffioli, F.: An annotated bibliography of combinatorial optimization problems with fixed cardinality constraints. Discret. Appl. Math. 154(9), 1344–1357 (2006)

  16. Porn, R., Nissfolk, O., Jansson, F., Westerlund, T.: The Coulomb glass—modeling and computational experience with a large scale 0-1 QP problem. In: Pistikopoulos, E.N., Georgiadis, M.C., Kokossis, A.C. (eds.) 21st European Symposium on Computer Aided Process Engineering, vol. 29, pp. 658–662 (2011)

  17. GUROBI Documentation. GUROBI (2016). http://www.gurobi.com/documentation/6.0/. Accessed 29 May 2016

  18. FICO Xpress Optimization Suite—Xpress-Optimizer—Reference manual. FICO (2009). http://www.fico.com/en/wp-content/secure_upload/Xpress-Optimizer-Refernce-Manual. Accessed 29 May 2016

  19. Achterberg, T.: SCIP: solving constraint integer programs. Math. Program. Comput. 1(1), 1–41 (2009)

  20. Androulakis, I.P., Maranas, C.D., Floudas, C.A.: \(\alpha \)BB: A global optimization method for general constrained nonconvex problems. J. Glob. Optim. 7(4), 337–363 (1995)

  21. Lindo Systems Inc.: LINDOGlobal (2007). https://www.gams.com/help/index.jsp. Accessed 30 May 2016

  22. Belotti, P., Lee, J., Liberti, L., Margot, F., Waechter, A.: Branching and bounds tightening techniques for non-convex MINLP. Optim. Method. Softw. 24(4–5), 597–634 (2009)

  23. Tawarmalani, M., Sahinidis, N.V.: A polyhedral branch-and-cut approach to global optimization. Math. Program. 103, 225–249 (2005)

  24. Misener, R., Floudas, C.A.: ANTIGONE: algorithms for continuous/integer global optimization of nonlinear equations. J. Glob. Optim. 59(2–3), 503–526 (2014)

  25. Glover, F., Woolsey, E.: Converting the 0–1 polynomial programming problem to a 0–1 linear program. Oper. Res. 22(1), 180–182 (1974)

  26. Raman, R., Grossmann, I.E.: Relation between MILP modeling and logical inference for chemical process synthesis. Comput. Chem. Eng. 15(2), 73–84 (1991)

  27. Sherali, H.D., Adams, W.P.: A Reformulation-Linearization Technique for Solving Discrete and Continuous Nonconvex Problems. Springer US (1999)

  28. Billionnet, A., Elloumi, S.: Using a mixed integer quadratic programming solver for the unconstrained quadratic 0–1 problem. Math. Program. 109(1), 55–68 (2007)

  29. Pisinger, D.: The quadratic knapsack problem—a survey. Discret. Appl. Math. 155(5), 623–648 (2007)

  30. Mehrotra, A.: Cardinality constrained Boolean quadratic polytope. Discret. Appl. Math. 79(1–3), 137–154 (1997)

  31. Boros, E., Hammer, P.L.: Cut-polytopes, boolean quadric polytopes and nonnegative quadratic pseudo-boolean functions. Math. Oper. Res. 18(1), 245–253 (1993)

  32. Pardalos, P.M., Rodgers, G.P.: Computational aspects of a branch and bound algorithm for quadratic zero-one programming. Computing 45(2), 131–144 (1990)

  33. Sherali, H.D., Lee, Y.H., Adams, W.P.: A simultaneous lifting strategy for identifying new classes of facets for the boolean quadric polytope. Oper. Res. Lett. 17(1), 19–26 (1995)

  34. Gueye, S., Michelon, P.: “Miniaturized” linearizations for quadratic 0/1 problems. Ann. Oper. Res. 140(1), 235–261 (2005)

  35. Gueye, S., Michelon, P.: A linearization framework for unconstrained quadratic (0–1) problems. Discret. Appl. Math. 157(6), 1255–1266 (2009)

  36. Hansen, P., Meyer, C.: Improved compact linearizations for the unconstrained quadratic 0–1 minimization problem. Discret. Appl. Math. 157(6), 1267–1290 (2009)

  37. Liberti, L.: Compact linearization for binary quadratic problems. 4OR 5(3), 231–245 (2007)

  38. Burer, S., Letchford, A.N.: On nonconvex quadratic programming with box constraints. SIAM J. Optim. 20(2), 1073–1089 (2009)

  39. Johnson, E.L., Mehrotra, A., Nemhauser, G.L.: Min-cut clustering. Math. Program. 62(1), 133–151 (1993)

  40. Faye, A., Trinh, Q.: Polyhedral results for a constrained quadratic 0-1 problem. Technical Report CEDRIC-03-511, CEDRIC laboratory, CNAM-Paris, France (2003)

  41. Faye, A., Trinh, Q.A.: A polyhedral approach for a constrained quadratic 0–1 problem. Discret. Appl. Math. 149(1–3), 87–100 (2005)

  42. Macambira, E.M., de Souza, C.C.: The edge-weighted clique problem: valid inequalities, facets and polyhedral computations. Eur. J. Oper. Res. 123(2), 346–371 (2000)

  43. Glover, F.: Improved linear integer programming formulations of nonlinear integer problems. Manag. Sci. 22, 455–460 (1975)

  44. Rothberg, E.: An evolutionary algorithm for polishing mixed integer programming solutions. INFORMS J. Comput. 19(4), 534–541 (2007)

  45. Brooke, A., Kendrick, D., Meeraus, A., Raman, R.: GAMS—a user’s guide (1998)

  46. Bliek, C., Bonami, P., Lodi, A.: Solving mixed-integer quadratic programming problems with IBM-CPLEX: a progress report. In: Proceedings of the Twenty-Sixth RAMP Symposium, pp. 171–180. Hosei University, Tokyo, Japan (2014)

  47. Billionnet, A., Elloumi, S., Plateau, M.C.: Improving the performance of standard solvers for quadratic 0–1 programs by a tight convex reformulation: The QCR method. Discret. Appl. Math. 157(6), 1185–1197 (2009)

  48. Dinh, T.P., Canh, N.N., Le, T., An, H.: An efficient combined DCA and B&B using DC/SDP relaxation for globally solving binary quadratic programs. J. Glob. Optim. 48(4), 595–632 (2010)

  49. Rendl, F., Rinaldi, G., Wiegele, A.: Solving max-cut to optimality by intersecting semidefinite and polyhedral relaxations. Math. Program. Ser. A 121, 307–335 (2010)

  50. Lima, R.M., Grossmann, I.E.: Computational advances in solving mixed integer linear programming problems. In: Chemical Engineering Greetings to Prof. Sauro Pierucci, pp. 151–160. AIDIC (2011)

  51. Bixby, R., Rothberg, E.: Progress in computational mixed integer programming—a look back from the other side of the tipping point. Ann. Oper. Res. 149(1), 37–41 (2007)

  52. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002)

  53. Bussieck, M.R., Dirkse, S.P., Vigerske, S.: Paver 2.0: an open source environment for automated performance analysis of benchmarking data. J. Glob. Optim. 59(2–3), 259–275 (2014)

  54. Lima, R.M.: IBM ILOG CPLEX What is inside of the box? (2010). http://egon.cheme.cmu.edu/ewocp/docs/rlima_cplex_ewo_dec2010. Accessed 22 Oct 2015

Acknowledgments

The first author acknowledges the support of the Center for Uncertainty Quantification in Computational Science & Engineering. This research used resources in the King Abdullah University of Science and Technology Scientific Computing Center, supported by the Information Technology, Research Computing Team. The authors would like to thank the Center for Advanced Process Decision-making at Carnegie Mellon University for their financial support. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007- 2013) under grant agreement n. PCOFUND-GA-2009-246542 and from the Fundação para a Ciência e Tecnologia, Portugal, under Grant agreement n. DFRH/WIIA/67/2011.

Author information

Corresponding author

Correspondence to Ricardo M. Lima.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 5627 KB)

Appendices

Appendix 1: A note on formulation (F3)

The validity of formulation (F3) can be established as follows. The constraint \(z_{ij} \ge x_{i}+ x_{j} - 1\), which forces \(z_{ij}=1\) when \(x_{i}=1\) and \(x_{j}=1\), is not included in (F3). In formulation (F3), this relation is instead enforced by constraint (10). To demonstrate this, consider \(M=3\), \(x_{j}=1\), \(x_{k}=1\), \(x_{l}=1\), \(x_{i}=0, \forall \, i \ne j,i\ne k,i\ne l\), with \(j<k<l\); then constraint (10) yields

$$\begin{aligned}&\sum \limits _{s< j}{z_{sj}} + \sum \limits _{s>j}{z_{js}}= 2, \end{aligned}$$
(15)
$$\begin{aligned}&\sum \limits _{s< k}{z_{sk}} + \sum \limits _{s>k}{z_{ks}}= 2, \end{aligned}$$
(16)
$$\begin{aligned}&\sum \limits _{s< l}{z_{sl}} + \sum \limits _{s>l}{z_{ls}}= 2, \end{aligned}$$
(17)
$$\begin{aligned}&\sum \limits _{s<i}{z_{si}} + \sum \limits _{s>i}{z_{is}}= 0, \qquad \forall \, i\ne j,i\ne k,i\ne l, \end{aligned}$$
(18)

where in each of (15), (16), and (17) there are only two variables z that can be positive, because the fourth set of equations (18) forces all the other variables to zero, leading to \(z_{jk}=1\), \(z_{jl}=1\), \(z_{kl}=1\), which is a solution of the original problem. Note that in (18) we have \(x_{i}=0\), and then the right-hand side is 0.
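
The demonstration above can be checked by brute force. A minimal sketch, assuming (as the four displayed equations suggest) that constraint (10) reads \(\sum _{s<i}{z_{si}} + \sum _{s>i}{z_{is}} = (M-1)\,x_{i}\) for all i; the function name is ours, for illustration only:

```python
from itertools import combinations, product

def feasible_z(n, active):
    """Enumerate all binary z satisfying, for every i,
    sum_{s<i} z_si + sum_{s>i} z_is = (M-1) * x_i."""
    x = [1 if i in active else 0 for i in range(n)]
    M = sum(x)
    pairs = list(combinations(range(n), 2))  # index pairs (i, j), i < j
    sols = []
    for bits in product([0, 1], repeat=len(pairs)):
        z = dict(zip(pairs, bits))
        if all(sum(z[(s, i)] for s in range(i))
               + sum(z[(i, s)] for s in range(i + 1, n))
               == (M - 1) * x[i] for i in range(n)):
            sols.append(z)
    return x, sols

# j, k, l = 0, 1, 2 active, n = 5, M = 3:
x, sols = feasible_z(5, active={0, 1, 2})
assert len(sols) == 1                                    # unique z
assert all(sols[0][i, j] == x[i] * x[j] for i, j in sols[0])
```

The unique feasible z indeed satisfies \(z_{ij}=x_{i}x_{j}\), i.e., \(z_{01}=z_{02}=z_{12}=1\) and all other entries zero.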

Appendix 2: Formulation (F6)

Compared with the other formulations, formulation (F6) has some additional elements that require clarification. Therefore, in this appendix we derive formulation (F6) and explain the meaning of the variable \(t_{i}\) and of the parameters \(L_{i}\) and \(U_{i}\). The derivation is based on the linearization technique proposed in [43] and on one of the formulations derived in [9].

We start by defining the CBQP problem of interest as

$$\begin{aligned} \begin{array}{rl} \text{ min }&{} \sum \limits _{i}{q_{ii}x_{i}} + \sum \limits _{i}{\sum \limits _{j>i}{(q_{ij} + q_{ji})x_{i}x_{j}}} \\ \text{ subject } \text{ to } &{} \sum \limits _{i}{x_{i}} = M \\ &{} x_{i} \in \{0,1\}, \qquad \forall \, i. \end{array} \end{aligned}$$
(19)

Following Glover [43], we define a new variable \(z_{i}\), indexed only by i, as

$$\begin{aligned} z_{i} = x_{i}\left[ \sum \limits _{j<i}{(q_{ij} + q_{ji})x_{j}} + \sum \limits _{j>i}{(q_{ij} + q_{ji})x_{j}}\right] , \qquad \forall \, i, \end{aligned}$$
(20)

which is used to replace the quadratic term in the objective function of (19). Next, the linearization technique is applied to the product of \(x_{i}\) and the term within the square brackets in (20). For this linearization we need lower and upper bounds on the term inside the square brackets in (20):

$$\begin{aligned} L_{i} \le \sum \limits _{j<i}{(q_{ij} + q_{ji})x_{j}} + \sum \limits _{j>i}{(q_{ij} + q_{ji})x_{j}} \le U_{i}, \qquad \forall \, i. \end{aligned}$$
(21)

In the context of problem (19), because of the cardinality constraint, \(L_{i}\) is equal to the sum over j of the \(M-1\) smallest coefficients \(q_{ij} + q_{ji}\), and \(U_{i}\) is equal to the sum over j of the \(M-1\) greatest coefficients \(q_{ij} + q_{ji}\); see [9] for additional details. Applying the linearization technique from Glover [43] to (20), problem (19) can be re-written as

$$\begin{aligned} \begin{array}{rl} \text{ min }&{} \sum \limits _{i}{q_{ii}x_{i}} + \frac{1}{2} \sum \limits _{i}{z_{i}} \\ \text{ subject } \text{ to } &{} \sum \limits _{i}{x_{i}} = M \\ &{} z_{i} \ge \sum \limits _{j<i}{(q_{ij} + q_{ji})x_{j}} + \sum \limits _{j>i}{(q_{ij} + q_{ji})x_{j}} - U_{i}(1 - x_{i}), \qquad \forall \, i \\ &{} z_{i} \ge L_{i}x_{i}, \qquad \forall \, i \\ &{} x_{i} \in \{0,1\}, \qquad \forall \, i. \end{array} \end{aligned}$$
(22)
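
The bounds \(L_{i}\) and \(U_{i}\) described above can be computed directly from the \(M-1\) smallest and greatest doubled coefficients. A minimal sketch (the function name and the sample matrix Q are ours, for illustration only):

```python
def cardinality_bounds(Q, M):
    """L_i and U_i from the M-1 smallest / greatest doubled
    coefficients q_ij + q_ji (j != i), following [9]."""
    n, k = len(Q), M - 1
    L, U = [], []
    for i in range(n):
        coeffs = sorted(Q[i][j] + Q[j][i] for j in range(n) if j != i)
        L.append(sum(coeffs[:k]))                # k smallest coefficients
        U.append(sum(coeffs[len(coeffs) - k:]))  # k greatest coefficients
    return L, U

Q = [[0, 1, 2, 3],
     [1, 0, 4, 5],
     [2, 4, 0, 6],
     [3, 5, 6, 0]]
L, U = cardinality_bounds(Q, M=3)
# For i = 0 the doubled coefficients are [2, 4, 6]:
# L_0 = 2 + 4 = 6 and U_0 = 4 + 6 = 10.
```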

Note that the factor \(1/2\) in the objective function of (22) is needed to prevent the double counting of \((q_{ij} + q_{ji})\) in \(z_{i}\) and \(z_{j}\) when \(x_{i}=1\) and \(x_{j}=1\). Note also that because we are minimizing, we do not need to include in (22) the other two constraints derived in the linearization technique proposed in [43]:

$$\begin{aligned} z_{i}\le & {} \sum \limits _{j<i}{(q_{ij} + q_{ji})x_{j}} + \sum \limits _{j>i}{(q_{ij} + q_{ji})x_{j}} - L_{i}(1 - x_{i}), \qquad \forall \, i \end{aligned}$$
(23)
$$\begin{aligned} z_{i}\le & {} U_{i}x_{i}, \qquad \forall \, i. \end{aligned}$$
(24)

Defining \(t_{i} = z_{i} - U_{i} x_{i}\) and replacing \(z_{i}\) by \(t_{i} + U_{i} x_{i}\) in problem (22), the problem can be re-written as

$$\begin{aligned} \begin{array}{rl} \min &{} \sum \limits _{i}{q_{ii}x_{i}} + \frac{1}{2} \sum \limits _{i}^{}{t_{i}} + \frac{1}{2} \sum \limits _{i}^{}{U_{i} x_{i}}\\ \text{ subject } \text{ to } &{} \sum \limits _{i}{x_{i}} = M \\ &{} t_{i} \ge \sum \limits _{j<i}{\left( q_{ji}+q_{ij}\right) x_{j}}+ \sum \limits _{j>i}{\left( q_{ij}+q_{ji}\right) x_{j}} - U_{i}, \qquad \forall \, i \\ &{} t_{i} \ge \left( L_{i} - U_{i}\right) x_{i}, \qquad \forall \, i \\ &{} x_{i} \in \{0,1\}, \qquad \forall \, i, \end{array} \end{aligned}$$
(25)

which leads to formulation (F6).
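
The exactness of the linearization at every feasible point can be verified by brute force. In the sketch below, the bounds \(L_{i}\), \(U_{i}\) are themselves obtained by brute force over all feasible x (a conservative choice that keeps the check self-contained, rather than the \(M-1\) smallest/greatest rule above); the function names and the sample data are ours, for illustration only:

```python
from itertools import combinations

def quad_obj(Q, x):
    """Quadratic objective of the CBQP problem."""
    n = len(Q)
    return (sum(Q[i][i] * x[i] for i in range(n))
            + sum((Q[i][j] + Q[j][i]) * x[i] * x[j]
                  for i in range(n) for j in range(i + 1, n)))

def f6_obj(Q, x, L, U):
    """Objective of the linearized problem at a fixed binary x:
    when minimizing, each t_i settles at the larger of its two
    lower bounds."""
    n = len(Q)
    s = [sum((Q[i][j] + Q[j][i]) * x[j] for j in range(n) if j != i)
         for i in range(n)]
    t = [max(s[i] - U[i], (L[i] - U[i]) * x[i]) for i in range(n)]
    return (sum(Q[i][i] * x[i] for i in range(n))
            + 0.5 * sum(t)
            + 0.5 * sum(U[i] * x[i] for i in range(n)))

n, M = 5, 3
Q = [[(7 * i + 3 * j) % 5 - 2 for j in range(n)] for i in range(n)]
feas = [[1 if i in ones else 0 for i in range(n)]
        for ones in combinations(range(n), M)]
# Bounds on the bracketed sum, valid for every feasible x.
S = [[sum((Q[i][j] + Q[j][i]) * x[j] for j in range(n) if j != i)
      for i in range(n)] for x in feas]
L = [min(row[i] for row in S) for i in range(n)]
U = [max(row[i] for row in S) for i in range(n)]
# The linearized objective matches the quadratic one everywhere.
assert all(f6_obj(Q, x, L, U) == quad_obj(Q, x) for x in feas)
```

At an active index (\(x_{i}=1\)) the inner maximum is \(s_{i}-U_{i}\), so \(t_{i}+U_{i}x_{i}\) recovers \(z_{i}=s_{i}\); at an inactive index it is 0, so the term vanishes, exactly as the derivation requires.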

Appendix 3: Performance profiles

The performance profiles presented here are based on the work of Dolan and Moré [52], who proposed a graphical comparison methodology to support the benchmarking of optimization software. This type of profile is used in several benchmark tests as an alternative to comparison techniques based on averages, standard deviations, or other statistical indicators.

For a given set of results P, the performance profile represents the cumulative distribution function \(p_{s}(\tau )\) of the performance ratio between the computational time of instance p solved with solver s and the minimum computational time over all solvers for the same instance. Thus the performance ratio is given by

$$\begin{aligned} \rho _{p,s} = \frac{t_{p,s}}{\min \{t_{p,s}: 1 \le s \le n_{s}\}}\;, \quad \forall \, p,s, \end{aligned}$$
(26)

and the cumulative performance profile is defined as

$$\begin{aligned} p_{s}(\tau ) = \frac{1}{n_{p}}\;\text {size} \left\{ p\in P: \rho _{p,s} \le \tau \right\} , \quad \forall \, s \end{aligned}$$
(27)

where \(t_{p,s}\) is the computational time that solver s needed to solve problem p, \(n_{s}\) is the number of solvers, \(n_{p}\) is the number of instances, and \(\tau \) denotes the time factor of interest. Performance ratios with \(\tau =1\) correspond to the solvers with the minimum computational time, and therefore \(p_s(1)\) is the probability that solver s has the best performance on any given problem [52].

Fig. 13 Example of a performance profile built over the CPU time

As an example, consider the performance profiles based on the CPU time in Fig. 1, where the profiles for the MILP formulations are presented. These performance profiles are built over the formulations, not over the solvers. This means that each formulation corresponds to an index s (denoted by solver in the nomenclature above), so that \(n_{s}=6\) and \(n_{p}= 80\, \mathrm { instances} \times 3\, \mathrm {MILP\, solvers} = 240\). Each of these 240 instances is then solved with each formulation. For each instance, the minimum computational time over the six formulations is calculated, and the computational time of each formulation is divided by this minimum; the quotient is the performance ratio. From the performance ratios of all instances, the cumulative profile is then calculated for a set of fixed time factors. \(p_s(1)\) is the probability that formulation s is the best formulation to solve any given problem, independently of the solver and of the parameters of the 80 instances defined for matrix Q. For \(\tau =4\), \(p_s(4)\) is the probability that a formulation is within a time factor of 4 of the best formulation. The performance profiles also provide information about the formulations that fail to solve a set of problems to optimality. This is given by \(1 - p_{s}(\tau )\): profiles that do not reach \(p_{s}(\tau )= 1\) indicate that there is a probability that the corresponding formulation will not solve a subset of problems to optimality within the maximum CPU time. Figure 13 illustrates a performance profile based on the computational times.
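
The performance ratios and cumulative profiles defined above can be computed in a few lines. A minimal sketch with hypothetical timing data (the function name and the numbers are ours, for illustration only):

```python
def performance_profiles(times, taus):
    """times[p][s]: CPU time of solver (or formulation) s on
    problem p.  Returns the cumulative profile p_s(tau) for each
    s at each tau, as in Dolan and More [52]."""
    n_p, n_s = len(times), len(times[0])
    # Ratio of each time to the best time on the same problem.
    ratios = [[times[p][s] / min(times[p]) for s in range(n_s)]
              for p in range(n_p)]
    return [[sum(ratios[p][s] <= tau for p in range(n_p)) / n_p
             for tau in taus] for s in range(n_s)]

times = [[2.0, 1.0, 4.0],   # problem 0
         [3.0, 6.0, 3.0],   # problem 1
         [1.0, 5.0, 8.0]]   # problem 2
profiles = performance_profiles(times, taus=[1, 2, 4])
# Solver 0 is (tied) fastest on problems 1 and 2: p_0(1) = 2/3.
```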

The performance profiles based on the absolute gap [53] are easier to understand and construct than the profiles based on the computational times described above. A cumulative performance profile using absolute gaps for each solver s is defined as

$$\begin{aligned} p_{s}(Gap (\%)) = \frac{1}{n_{p}}\;\text {size} \left\{ p\in P: \gamma _{p,s} \le Gap (\%) \right\} , \quad \forall \, s, \end{aligned}$$
(28)

where \(Gap (\%)\) is the optimality gap of interest and \(\gamma _{p,s}\) is the optimality gap that solver s obtained for problem p. As an example, consider the performance profiles based on the optimality gap in Fig. 1, where the profiles for the MILP formulations are presented. As before, these profiles are built over the formulations, not over the solvers, so that each formulation corresponds to an index s, \(n_{s}=6\), and \(n_{p}=240\). For each formulation, the cumulative profile of the optimality gaps of the 240 optimization runs is determined. \(p_s(0)\) is the probability that formulation s solves any given problem to optimality, independently of the solver and of the parameters of the 80 instances defined for matrix Q. Figure 14 depicts an absolute profile for optimality gaps.
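
The gap-based profile is a direct cumulative count. A minimal sketch with hypothetical gap data (names and numbers are ours, for illustration only):

```python
def gap_profile(gaps, gap_levels):
    """gaps[s][p]: optimality gap (%) that solver s reached on
    problem p.  p_s(g) is the fraction of problems with gap <= g."""
    n_p = len(gaps[0])
    return [[sum(g_ps <= g for g_ps in solver_gaps) / n_p
             for g in gap_levels] for solver_gaps in gaps]

gaps = [[0.0, 0.0, 3.5],    # formulation 1
        [0.0, 1.2, 9.0]]    # formulation 2
profile = gap_profile(gaps, gap_levels=[0, 1, 5, 10])
# Formulation 1 closes the gap on 2/3 of the runs and is always
# within a 5 % gap.
```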

Fig. 14 Example of a performance profile based on the absolute gap


About this article


Cite this article

Lima, R.M., Grossmann, I.E. On the solution of nonconvex cardinality Boolean quadratic programming problems: a computational study. Comput Optim Appl 66, 1–37 (2017). https://doi.org/10.1007/s10589-016-9856-7
