
Solving Sparse Instances of Max SAT via Width Reduction and Greedy Restriction

Theory of Computing Systems

Abstract

We present moderately exponential time, polynomial space algorithms for sparse instances of Max SAT. Our algorithms run in time of the form \(O\left (2^{(1-\mu (c))n}\right )\) for instances with n variables and cn clauses. Our deterministic and randomized algorithms achieve \(\mu (c) = {\Omega }\left (\frac {1}{c^{2}\log ^{2} c}\right )\) and \(\mu (c) = {\Omega }\left (\frac {1}{c \log ^{3} c}\right )\), respectively. Previously, an exponential space deterministic algorithm with \(\mu (c) = {\Omega }\left (\frac {1}{c\log c}\right )\) was shown by Dantsin and Wolpert [SAT 2006], and a polynomial space deterministic algorithm with \(\mu (c) = {\Omega }\left (\frac {1}{2^{O(c)}}\right )\) was shown by Kulikov and Kutzkov [CSR 2007]. Our algorithms have three new features: they can handle instances with (1) weights and (2) hard constraints, and (3) they can solve counting versions of Max SAT. Our deterministic algorithm is based on the combination of two techniques, the width reduction of Schuler and the greedy restriction of Santhanam. Our randomized algorithm uses random restriction instead of greedy restriction.


References

  1. Bansal, N., Raman, V.: Upper bounds for MaxSAT: Further improved. In: Proceedings of the 10th International Symposium on Algorithms and Computation (ISAAC), Lecture Notes in Computer Science, vol. 1741, pp 247–258. Springer (1999)

  2. Binkele-Raible, D., Fernau, H.: A new upper bound for Max-2-SAT: a graph-theoretic approach. J. Discret. Algorithm. 8(4), 388–401 (2010)


  3. Bliznets, I., Golovnev, A.: A new algorithm for parameterized MAX-SAT. In: Proceedings of the 7th International Symposium on Parameterized and Exact Computation (IPEC), Lecture Notes in Computer Science, vol. 7535, pp 37–48. Springer (2012)

  4. Calabro, C., Impagliazzo, R., Paturi, R.: A duality between clause width and clause density for SAT. In: Proceedings of the 21st Annual IEEE Conference on Computational Complexity (CCC), pp. 252–260 (2006)

  5. Calabro, C., Impagliazzo, R., Paturi, R.: The complexity of satisfiability of small depth circuits. In: Revised Selected Papers from the 4th International Workshop on Parameterized and Exact Computation, Lecture Notes in Computer Science, vol. 5917, pp 75–85 (2009)

  6. Chen, J., Kanj, I.A.: Improved exact algorithms for MAX-SAT. Discret. Appl. Math. 142(1-3), 17–27 (2004)


  7. Chen, R., Kabanets, V., Kolokolova, A., Shaltiel, R., Zuckerman, D.: Mining circuit lower bound proofs for meta-algorithms. Electronic Colloquium on Computational Complexity (ECCC) TR13-057 (2013). Also in: Proceedings of the 29th Annual IEEE Conference on Computational Complexity (CCC), pp. 262–273 (2014)

  8. Dantsin, E., Wolpert, A.: MAX-SAT for formulas with constant clause density can be solved faster than in O(2n) time. In: Proceedings of the 9th International Conference on Theory and Applications of Satisfiability Testing (SAT), Lecture Notes in Computer Science, vol. 4121, pp 266–276. Springer (2006)

  9. Gaspers, S., Sorkin, G.B.: A universally fastest algorithm for Max 2-SAT, Max 2-CSP, and everything in between. J. Comput. Syst. Sci. 78(1), 305–335 (2012)


  10. Gramm, J., Hirsch, E.A., Niedermeier, R., Rossmanith, P.: Worst-case upper bounds for MAX-2-SAT with an application to MAX-CUT. Discret. Appl. Math. 130(2), 139–155 (2003)


  11. Gramm, J., Niedermeier, R.: Faster exact solutions for MAX2SAT. In: Proceedings of the 4th Italian Conference on Algorithms and Complexity (CIAC), Lecture Notes in Computer Science, vol. 1767, pp 174–186. Springer (2000)

  12. Gutin, G., Yeo, A.: Constraint satisfaction problems parameterized above or below tight bounds: A survey. In: The Multivariate Algorithmic Revolution and Beyond, Lecture Notes in Computer Science, vol. 7370, pp 257–286. Springer (2012)

  13. Håstad, J.: The shrinkage exponent of De Morgan formulas is 2. SIAM. J. Comput. 27(1), 48–64 (1998)


  14. Hirsch, E.A.: A new algorithm for MAX-2-SAT. In: Proceedings of the 17th Annual Symposium on Theoretical Aspects of Computer Science (STACS), Lecture Notes in Computer Science, vol. 1770, pp 65–73. Springer (2000)

  15. Hirsch, E.A.: Worst-case study of local search for MAX-k-SAT. Discret. Appl. Math. 130(2), 173–184 (2003)


  16. Horowitz, E., Sahni, S.: Computing partitions with applications to the knapsack problem. J. ACM 21(2), 277–292 (1974)


  17. Impagliazzo, R., Matthews, W., Paturi, R.: A satisfiability algorithm for AC0. In: Proceedings of the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 961–972 (2012)

  18. Impagliazzo, R., Paturi, R., Schneider, S.: A satisfiability algorithm for sparse depth two threshold circuits. In: Proceedings of the 54th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 479–488 (2013)

  19. Kneis, J., Mölle, D., Richter, S., Rossmanith, P.: On the parameterized complexity of exact satisfiability problems. In: Proceedings of the 30th International Symposium on Mathematical Foundations of Computer Science (MFCS), Lecture Notes in Computer Science, vol. 3618, pp 568–579. Springer (2005)

  20. Kneis, J., Mölle, D., Richter, S., Rossmanith, P.: A bound on the pathwidth of sparse graphs with applications to exact algorithms. SIAM. J. Discret. Math 23(1), 407–427 (2009)


  21. Koivisto, M.: Optimal 2-constraint satisfaction via sum-product algorithms. Inf. Process. Lett. 98(1), 24–28 (2006)


  22. Kojevnikov, A., Kulikov, A.S.: A new approach to proving upper bounds for MAX-2-SAT. In: Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 11–17 (2006)

  23. Kulikov, A.S.: Automated generation of simplification rules for SAT and MAXSAT. In: Proceedings of the 8th International Conference on Theory and Applications of Satisfiability Testing (SAT), Lecture Notes in Computer Science, vol. 3569, pp 430–436. Springer (2005)

  24. Kulikov, A.S., Kutzkov, K.: New bounds for MAX-SAT by clause learning. In: Proceedings of the 8th International Computer Science Symposium in Russia (CSR), Lecture Notes in Computer Science, vol. 4649, pp 194–204. Springer (2007)

  25. Mahajan, M., Raman, V.: Parameterizing above guaranteed values: MaxSAT and MaxCUT. J. Algorithm. 31(2), 335–354 (1999)


  26. Niedermeier, R., Rossmanith, P.: New upper bounds for maximum satisfiability. J. Algorithm. 36(1), 63–88 (2000)


  27. Santhanam, R.: Fighting perebor: New and improved algorithms for formula and QBF satisfiability. In: Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 183–192 (2010)

  28. Schuler, R.: An algorithm for the satisfiability problem of formulas in conjunctive normal form. J. Algorithm. 54(1), 40–44 (2005)


  29. Scott, A.D., Sorkin, G.B.: Faster algorithms for MAX CUT and MAX CSP, with polynomial expected time for sparse instances. In: Proceedings of the 6th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX) and the 7th International Workshop on Randomization and Computation (RANDOM), Lecture Notes in Computer Science, vol. 2764, pp 382–395. Springer (2003)

  30. Scott, A.D., Sorkin, G.B.: Linear-programming design and analysis of fast algorithms for Max 2-CSP. Discret. Optim. 4(3-4), 260–287 (2007)


  31. Seto, K., Tamaki, S.: A satisfiability algorithm and average-case hardness for formulas over the full binary basis. Comput. Complex. 22(2), 245–274 (2013)


  32. Williams, R.: A new algorithm for optimal 2-constraint satisfaction and its implications. Theor. Comput. Sci. 348(2-3), 357–365 (2005)


  33. Williams, R.: Nonuniform ACC circuit lower bounds. J. ACM 61(1), Article 2, 1–32 (2014)


Acknowledgements

We would like to thank the anonymous referee for the helpful comments on an earlier version of this paper. KS was supported in part by MEXT KAKENHI (24106001, 24106003); JSPS KAKENHI (26730007). ST was supported in part by MEXT KAKENHI (24106003); JSPS KAKENHI (25240002, 26330011).

Author information

Correspondence to Kazuhisa Seto.

Appendix: A Proof of Lemma 2

In this appendix, we give the full proof of Lemma 2 to make this paper self-contained. We stress that all the proofs are due to Chen et al. [7], except that we consider Max formula SAT and \(\widetilde {L}(\cdot )\) instead of De Morgan formula SAT and \(L(\cdot)\).

Lemma 5 (Restatement of Lemma 2)

Let Φ be an instance of Max formula SAT on n variables. For any k≥4, we have

$$\begin{array}{@{}rcl@{}} \Pr\left[\widetilde{L}({\Phi}_{n-k}) \geq 2\cdot \widetilde{L}({\Phi})\cdot\left(\frac{k}{n}\right)^{3/2}\right]<2^{-k}. \end{array} $$

To prove this lemma, we need some definitions and lemmas.

Definition 1

A sequence of random variables \(X_{0}, X_{1}, \ldots, X_{n}\) is a supermartingale with respect to a sequence of random variables \(R_{1}, R_{2}, \ldots, R_{n}\) if \(\mathbf {E}\left [X_{i} \mid R_{i-1}, \ldots , R_{1}\right ] \leq X_{i-1}\) for \(1 \leq i \leq n\).

Chen et al. proved the following lemma.

Lemma 6 (Lemma 4.2 in [7])

Let \(\{X_{i}\}^{n}_{i=0}\) be a supermartingale with respect to \(\{R_{i}\}^{n}_{i=1}\). Let \(Y_{i} = X_{i} - X_{i-1}\). If, for every \(1 \leq i \leq n\), the random variable \(Y_{i}\) (conditioned on \(R_{i-1}, \ldots, R_{1}\)) assumes two values with equal probability, and there exists a constant \(c_{i} \geq 0\) such that \(Y_{i} \leq c_{i}\), then, for any \(\lambda > 0\), we have

$$\begin{array}{@{}rcl@{}} \Pr\left[X_{n} - X_{0} \geq \lambda\right] \leq \exp\left(-\frac{\lambda^{2}}{2{\sum}^{n}_{i=1} {c_{i}^{2}}}\right). \end{array} $$
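As a sanity check (illustrative only, not part of the original argument), the bound of Lemma 6 can be verified exactly for a small fair-coin martingale, where all \(2^{n}\) increment sequences can be enumerated; a martingale is in particular a supermartingale, and each increment \(Y_{i} = \pm c_{i}\) takes two values with equal probability. A minimal Python sketch:

```python
import math
from itertools import product

# Exhaustively verify the Azuma-type bound of Lemma 6 for a fair +/-1
# martingale: X_n - X_0 is a sum of n independent +/-c_i increments
# (here c_i = 1), so the exact tail probability can be computed by
# enumerating all 2^n sign sequences.
n = 16
c = [1.0] * n
lam = 8.0

count = sum(1 for signs in product((-1, 1), repeat=n) if sum(signs) >= lam)
exact = count / 2**n  # exact Pr[X_n - X_0 >= lam]

bound = math.exp(-lam**2 / (2 * sum(ci**2 for ci in c)))  # Lemma 6 bound
assert exact <= bound  # 2517/65536 ~ 0.0384 <= e^{-2} ~ 0.1353
```

The exact tail here is \(\sum_{k=12}^{16}\binom{16}{k}/2^{16}\), comfortably below the \(e^{-2}\) that Lemma 6 guarantees.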

Now, we prove the counterpart of Lemma 4.4 in [7] with respect to \(\widetilde {L}(\cdot )\).

Lemma 7 (Lemma 4.4 in [7] with respect to \(\widetilde {L}(\cdot )\))

Let Φ be an instance of Max formula SAT on n variables, and let \(x=\arg \max _{x \in \text {var}({\Phi })}\widetilde {\text {freq}}_{\Phi }(x)\). Then \(\max \{\widetilde {L}({\Phi }[x=0]), \widetilde {L}({\Phi }[x=1])\} \leq \widetilde {L}({\Phi }) \cdot \left (1-\frac {1}{n}\right )\), and \(\mathbf {E}_{a \in \{0,1\}}\left [\widetilde {L}\left ({\Phi }[x=a]\right )\right ] \leq \widetilde {L}({\Phi }) \cdot \left (1-\frac {1}{n}\right )^{3/2}\).

Proof

First note that in any \(\phi_{i}\) such that \(L(\phi_{i}) \geq 2\) and \(x \in \text{var}(\phi_{i})\), for each leaf labeled with \(x\) or \(\overline {x}\), its sibling subformula does not contain \(x\) or \(\overline {x}\), by the procedure Simplify. Since \(\widetilde {\text {freq}}(x)\ge \frac {\widetilde {L}({\Phi })}{n}\), assigning 0 or 1 to \(x\) decreases the size of Φ by at least \(\frac {\widetilde {L}({\Phi })}{n}\); this proves the first claim. Furthermore, when we set \(x\) to 0 or 1 uniformly at random, with probability 1/2 we can also remove the sibling of each occurrence of \(x\) or \(\overline {x}\). Each such sibling subformula does not contain \(x\) or \(\overline {x}\) and has at least one literal; thus,

$$\begin{array}{@{}rcl@{}} && \mathbf{E}_{a \in \{0,1\}}\left[\widetilde{L}\left({\Phi}[x=a]\right)\right] \leq \widetilde{L}({\Phi}) - \frac{\widetilde{L}({\Phi})}{n} - \frac{1}{2}\cdot\frac{\widetilde{L}({\Phi})}{n}\\ &=& \widetilde{L}({\Phi})\left(1-\frac{3}{2n}\right) \leq \widetilde{L}({\Phi}) \cdot \left(1-\frac{1}{n}\right)^{3/2}. \end{array} $$

□
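The final inequality in the display, \(1-\frac{3}{2n} \leq \left(1-\frac{1}{n}\right)^{3/2}\), is a Bernoulli-type estimate valid for all \(n \geq 2\); a quick numeric check (illustrative only, not part of the proof):

```python
# Numeric check of the elementary estimate 1 - 3/(2n) <= (1 - 1/n)^(3/2)
# used in the last step of the proof of Lemma 7. The gap is smallest for
# large n (roughly 3/(8n^2)), but stays nonnegative throughout.
min_gap = min((1 - 1 / n) ** 1.5 - (1 - 3 / (2 * n)) for n in range(2, 10_001))
assert min_gap >= 0  # the inequality holds for every n checked
```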

Let \(R_{i}\) be the random value assigned to the variable restricted in step \(i\). Set \(\widetilde {L}_{i} := \widetilde {L}({\Phi }_{i})\) and \(\ell _{i} := \log \widetilde {L}_{i}\). Define a sequence of random variables \(\{Z_{i}\}\) as follows:

$$\begin{array}{@{}rcl@{}} Z_{i} = \ell_{i}-\ell_{i-1}-\frac{3}{2}\log\left(1-\frac{1}{n-i+1}\right). \end{array} $$

Note that, given \(R_{1}, \ldots, R_{i-1}\), the random variable \(Z_{i}\) assumes two values with equal probability.

Lemma 8 (Lemma 4.5 in [7] with respect to \(\widetilde {L}(\cdot )\))

Let \(X_{0} = 0\) and \(X_{i} = {\sum }^{i}_{j=1}Z_{j}\). Then the sequence \(\{X_{i}\}\) is a supermartingale with respect to \(\{R_{i}\}\), and, for each \(Z_{i}\), we have \(Z_{i} \leq c_{i} := -\frac {1}{2}\log \left (1-\frac {1}{n-i+1}\right )\).

Proof

By Lemma 7, \(\ell _{i} \leq \ell _{i-1} + \log \left (1-\frac {1}{n-i+1}\right )\). Then, \(Z_{i} = \ell _{i}-\ell _{i-1}-\frac {3}{2}\log \left (1-\frac {1}{n-i+1}\right ) \leq -\frac {1}{2}\log \left (1-\frac {1}{n-i+1}\right ) = c_{i}\). By Jensen’s inequality, \(\mathbf {E}[\ell _{i} \mid R_{i-1},\ldots , R_{1}] \leq \log \mathbf {E}[\widetilde {L}_{i} \mid R_{i-1}, \ldots , R_{1}]\), which, by Lemma 7, is at most \(\log \left (\widetilde {L}_{i-1}\cdot \left (1-\frac {1}{n-i+1}\right )^{3/2}\right ) = \ell _{i-1}+\frac {3}{2}\log \left (1-\frac {1}{n-i+1}\right )\); this implies \(\mathbf {E}[Z_{i} \mid R_{i-1},\ldots ,R_{1}] \leq 0\), and so \(\{X_{i}\}\) is indeed a supermartingale. □

Now, we are ready to prove Lemma 5.

Proof of Lemma 5

Let λ be arbitrary, and let c i ’s be as defined in Lemma 8. By Lemma 6 and Lemma 8, we get

$$\Pr\left[\sum\limits_{j=1}^{i} Z_{j} \geq \lambda \right] \leq \exp\left(-\frac{\lambda^{2}}{2\sum^{i}_{j=1}c^{2}_{j}}\right). $$

It is easy to show that \({\sum }^{i}_{j=1} Z_{j} = \ell _{i}-\ell _{0}-\frac {3}{2}\log \frac {n-i}{n}\) from the definition of Z j . Hence,

$$\Pr\left[\sum\limits_{j=1}^{i} Z_{j} \geq \lambda \right] = \Pr\left[\ell_{i}-\ell_{0}-\frac{3}{2}\log\left(\frac{n-i}{n}\right)\geq \lambda\right]=\Pr\left[\widetilde{L}_{i}\geq e^{\lambda} \widetilde{L}_{0} \left(\frac{n-i}{n}\right)^{3/2}\right]. $$
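The telescoping step behind this can be spot-checked numerically: the \(\ell_{j}-\ell_{j-1}\) terms cancel, and \(\sum _{j=1}^{i}\log \left (1-\frac {1}{n-j+1}\right ) = \log \prod _{j=1}^{i}\frac {n-j}{n-j+1} = \log \frac {n-i}{n}\). A small Python check with arbitrary placeholder values for the \(\ell_{j}\) (illustrative only):

```python
import math

# Spot-check the telescoping identity
#   sum_{j=1}^{i} Z_j = ell_i - ell_0 - (3/2) * log((n - i) / n)
# with Z_j = ell_j - ell_{j-1} - (3/2) * log(1 - 1/(n - j + 1)).
# The ell values below are arbitrary placeholders; the identity holds
# for any sequence (natural log, as in the proof).
n, i = 50, 30
ell = [math.log(1000.0) - 0.1 * j for j in range(i + 1)]

z = [ell[j] - ell[j - 1] - 1.5 * math.log(1 - 1 / (n - j + 1))
     for j in range(1, i + 1)]
lhs = sum(z)
rhs = ell[i] - ell[0] - 1.5 * math.log((n - i) / n)
assert math.isclose(lhs, rhs)
```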

For each \(1 \leq j \leq i\), we have \(c_{j} = -\frac {1}{2}\log \left (1-\frac {1}{n-j+1}\right ) = \frac {1}{2}\log \left (1+\frac {1}{n-j}\right ) \leq \frac {1}{2}\cdot \frac {1}{n-j}\), using the inequality \(\log (1+x) \leq x\). Thus, for \(i \leq n-2\), \({\sum }^{i}_{j=1} {c^{2}_{j}}\) is at most

$$\frac{1}{4}\sum\limits_{j=1}^{i}\left(\frac{1}{n-j}\right)^{2} \leq \frac{1}{4}\sum\limits_{j=1}^{i}\left(\frac{1}{n-j-1}-\frac{1}{n-j}\right)=\frac{1}{4}\left(\frac{1}{n-i-1}-\frac{1}{n-1}\right)\leq\frac{1}{4}\cdot\frac{1}{n-i-1}. $$

Taking i = n−k (note that k ≥ 4 gives i ≤ n−4), we get

$$\Pr\left[\widetilde{L}_{n-k} \geq e^{\lambda}\widetilde{L}_{0}\left(\frac{k}{n}\right)^{3/2}\right] \leq \exp\left(-\frac{\lambda^{2}}{2\sum^{n-k}_{j=1}c^{2}_{j}}\right) \leq e^{-2\lambda^{2}(k-1)}. $$

Choosing \(\lambda = \ln 2\) gives \(e^{\lambda } = 2\) and a bound of \(e^{-2(\ln 2)^{2}(k-1)}\), which is at most \(2^{-k} = e^{-k\ln 2}\) whenever \(2\ln 2\cdot (k-1) \geq k\), i.e., for every \(k \geq 4\). This concludes the proof. □
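As a final sanity check on the constants (illustrative, not part of the proof): with \(\lambda = \ln 2\) we have \(e^{\lambda } = 2\), and both \(e^{-2\lambda ^{2}k}\) and the slightly weaker \(e^{-2\lambda ^{2}(k-1)}\) are at most \(2^{-k}\) for every \(k \geq 4\):

```python
import math

# With lambda = ln 2, e^lambda = 2, and the exponential tail bound stays
# below 2^{-k} for all k >= 4 (checked for both the k and the weaker
# k-1 exponent variants).
lam = math.log(2)
ok = all(math.exp(-2 * lam**2 * (k - 1)) <= 2.0**-k and
         math.exp(-2 * lam**2 * k) <= 2.0**-k
         for k in range(4, 101))
assert ok
```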

Cite this article

Sakai, T., Seto, K. & Tamaki, S. Solving Sparse Instances of Max SAT via Width Reduction and Greedy Restriction. Theory Comput Syst 57, 426–443 (2015). https://doi.org/10.1007/s00224-014-9600-6
