Abstract
Small-bias probability spaces have wide applications in pseudorandomness, which naturally leads to the study of their limitations. Constructing a polynomial-complexity hitting set for read-once CNF formulas is a basic open problem in pseudorandomness. We show in this paper that this goal is not achievable using small-bias spaces. Namely, we show that for each read-once CNF formula \(F\) with probability of acceptance \(p\) and with \(m\) clauses each of size \(c\), there exists a \(\delta\)-biased distribution \(\mu\) on \(\{0,1\}^{n}\), where \(n = mc\), such that \(\delta = 2^{-\Omega(\log{m}\log(1/p))}\) and no element in the support of \(\mu\) satisfies \(F\) (assuming that \(e^{-\sqrt {m}}\leq p \leq p_{0}\), where \(p_{0} > 0\) is an absolute constant). In particular, if \(p = n^{-\Theta(1)}\), the needed bias is \(2^{-\Omega(\log^{2}{n})}\), which requires a hitting set of size \(2^{\Omega (\log ^{2}{n})}\). Our lower bound on the needed bias is asymptotically tight. The dual version of our result asserts that if \(f_{low}:\{0, 1\}^{n}\rightarrow \mathbb {R}\) is such that \(\mathrm{E}[f_{low}] > 0\) and \(f_{low}(x) \leq 0\) for each \(x \in \{0,1\}^{n}\) such that \(F(x) = 0\), then the \(L_{1}\)-norm of the Fourier transform of \(f_{low}\) is at least \(\mathrm{E}[f_{low}]\, 2^{\Omega(\log{m}\log(1/p))}\). Our result extends a result due to De, Etesami, Trevisan, and Tulsiani (APPROX-RANDOM 2010), who proved that the small-bias property is not enough to obtain a polynomial-complexity PRG for a family of read-once formulas with \(\Theta(1)\) probability of acceptance.
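To make the quantities in the abstract concrete, the following minimal brute-force sketch (the example formula and all names are illustrative, not from the paper) computes \(bias_{z}(\mu) = |\sum_{x}\mu(x)(-1)^{\langle x,z\rangle}|\) for a distribution \(\mu\) supported entirely on the falsifying assignments of a small read-once CNF, so \(\mu\) misses the formula while still having bounded bias:

```python
from itertools import product

def bias(mu, z, n):
    """bias_z(mu) = |sum_x mu(x) * (-1)^<x,z>| over {0,1}^n."""
    total = 0.0
    for x in product((0, 1), repeat=n):
        dot = sum(xi * zi for xi, zi in zip(x, z)) % 2
        total += mu.get(x, 0.0) * (-1) ** dot
    return abs(total)

def max_bias(mu, n):
    """bias(mu) = max over nonzero z of bias_z(mu)."""
    return max(bias(mu, z, n) for z in product((0, 1), repeat=n) if any(z))

# Toy read-once CNF with m = 2 clauses of size c = 2 over n = mc = 4 variables:
# F(x) = (x1 OR x2) AND (x3 OR x4).
def F(x):
    return (x[0] or x[1]) and (x[2] or x[3])

# A distribution supported only on falsifying assignments: no point in its
# support satisfies F, yet its bias is a fixed constant < 1.
support = [x for x in product((0, 1), repeat=4) if not F(x)]
mu = {x: 1.0 / len(support) for x in support}
print(max_bias(mu, 4))
```

At this toy scale the bias is of course a constant; the theorem quantifies how small the bias can be made while still missing \(F\) as \(m\) grows.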
Notes
Let F be a DNF formula with m AND gates and probability of acceptance larger than 𝜖. By the union bound, at least one AND gate of F must have probability of acceptance larger than 𝜖/m. Since any δ-biased distribution is a δ-hitting distribution for AND gates (e.g., Lemma 1 in [3]), we get that any 𝜖/m-biased distribution is an 𝜖-hitting distribution for F.
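The union-bound step in this note can be checked numerically; a minimal sketch (the DNF terms are arbitrary illustrative choices, not from the paper):

```python
from itertools import product

# Toy DNF with m = 3 AND terms over n = 4 variables; each term is the set of
# variable indices it requires to be true.
terms = [(0, 1), (1, 2), (3,)]

def term_accepts(term, x):
    return all(x[i] for i in term)

def dnf_accepts(x):
    return any(term_accepts(t, x) for t in terms)

n, N = 4, 2 ** 4
p_dnf = sum(dnf_accepts(x) for x in product((0, 1), repeat=n)) / N
p_terms = [sum(term_accepts(t, x) for x in product((0, 1), repeat=n)) / N
           for t in terms]

# Union bound: p_dnf <= sum(p_terms), hence some term has probability
# of acceptance at least p_dnf / m.
assert p_dnf <= sum(p_terms)
assert max(p_terms) >= p_dnf / len(terms)
```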
Throughout the paper, log means log2.
Note that typically the t’th q-Krawtchouk polynomial is defined as
$$k_{t}^{m,q}(w):= \sum\limits_{a} \binom{w}{a} \binom{m-w}{t-a}(-1)^{a}(q-1)^{t-a}. $$Our normalized definition is related to the classical definition via:
$$ \mathcal{K}_{t}^{(m,c)}(w) = \mathcal{K}_{w}^{(m,c)}(t) = \frac{1}{\binom{m}{t}(2^{c}-1)^{t}}k_{t}^{m,2^{c}}(w), $$where the first equality follows from (6). We adopt the normalized version for technical convenience.
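Both definitions, and the symmetry \(\mathcal{K}_{t}^{(m,c)}(w) = \mathcal{K}_{w}^{(m,c)}(t)\) claimed above, are easy to check by brute force; a small sketch (function names are ours):

```python
from math import comb

def kraw(t, w, m, q):
    """Classical q-Krawtchouk polynomial k_t^{m,q}(w).
    math.comb(n, k) returns 0 when k > n, matching the binomial convention."""
    return sum(comb(w, a) * comb(m - w, t - a) * (-1) ** a * (q - 1) ** (t - a)
               for a in range(t + 1))

def kraw_norm(t, w, m, c):
    """Normalized version K_t^{(m,c)}(w) = k_t^{m,2^c}(w) / (C(m,t) (2^c - 1)^t)."""
    q = 2 ** c
    return kraw(t, w, m, q) / (comb(m, t) * (q - 1) ** t)

# Symmetry check K_t(w) = K_w(t) for small parameters:
m, c = 6, 2
for t in range(m + 1):
    for w in range(m + 1):
        assert abs(kraw_norm(t, w, m, c) - kraw_norm(w, t, m, c)) < 1e-9
```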
References
Alon, N., Ben-Eliezer, I., Krivelevich, M.: Small sample spaces cannot fool low degree polynomials. APPROX-RANDOM 2008, 266–275 (2008)
Andreev, A., Clementi, A.E., Rolim, J.D.: A new general derandomization method. J. ACM 45(1), 179–213 (1998)
Alon, N., Goldreich, O., Håstad, J., Peralta, R.: Simple constructions of almost k-wise independent random variables. Random Struct. Algorithms 3(3), 289–304 (1992)
Bazzi, L.: Minimum distance of error correcting codes versus encoding complexity, symmetry, and pseudorandomness. Ph.D. dissertation MIT (2003)
Bazzi, L.: Polylogarithmic independence can fool DNF formulas. In: Proceedings of 48th Annual IEEE Symposium on Foundations of Computer Science, pp 63–73 (2007)
Bazzi, L.: Polylogarithmic independence can fool DNF formulas. SIAM J. Comput. 38(6), 2220–2272 (2009)
Braverman, M.: Poly-logarithmic independence fools AC0 circuits. J. ACM 57 (5) (2010)
Blum, M., Micali, S.: How to generate cryptographically strong sequences of pseudo-random bits. SIAM J Comput. 13(4), 850–864 (1984)
De, A., Etesami, O., Trevisan, L., Tulsiani, M.: Improved pseudorandom generators for depth 2 circuits. APPROX-RANDOM 2010, 504–517 (2010)
Gopalan, P., Meka, R., Reingold, O., Trevisan, L., Vadhan, S.: Better pseudorandom generators from milder pseudorandom restrictions. In: Proceedings of the 53rd IEEE symposium on foundations of computer science, pp 120–129 (2012)
Impagliazzo, R., Wigderson, A.: P = BPP if E requires exponential circuits: derandomizing the XOR lemma. In: Proceedings of 29th annual ACM symposium on the theory of computing, pp 220–229 (1997)
Linial, N., Nisan, N.: Approximate inclusion-exclusion. Combinatorica 10 (4), 349–365 (1990)
Lovett, S., Reingold, O., Trevisan, L., Vadhan, S.: Pseudorandom bit generators that fool modular sums. APPROX-RANDOM 2009, 615–630 (2009)
Luby, M.: A simple parallel algorithm for the maximal independent set problem. In: Proceedings of 17th annual ACM symposium on the theory of computing, pp 1–10 (1985)
Luby, M., Velickovic, B., Wigderson, A.: Deterministic approximate counting of depth-2 circuits. In: Proceedings of the 2nd ISTCS, pp. 18–24 (1993)
Meka, R., Zuckerman, D.: Small-bias spaces for group products. APPROX-RANDOM 2009, 658–672 (2009)
Nisan, N.: Pseudorandom bits for constant depth circuits. Combinatorica 11(1), 63–70 (1991)
Naor, J., Naor, M.: Small bias probability spaces: efficient constructions and applications. SIAM J. Comput. 22(4), 838–856 (1993)
Nisan, N., Wigderson, A.: Hardness vs. randomness. In: Proceedings of 29th IEEE symposium on foundations of computer science, pp 2–11 (1988)
Paturi, R.: On the degree of polynomials that approximate symmetric boolean functions. In: Proceedings of 24th annual ACM symposium on the theory of computing, pp 468–474 (1992)
Razborov, A.: A simple proof of Bazzi’s theorem. ACM Trans. Comput. Theory 1(1), 1–5 (2009)
Sima, J., Zak, S.: Almost k-wise independent sets establish hitting sets for width 3 1-branching programs. In: 6th international computer science symposium in Russia, pp 120–133 (2011)
Szegő, G.: Orthogonal Polynomials, 4th edn. Colloquium Publications, vol. 23. Amer. Math. Soc., Providence (1975)
Trefethen, L., Weideman, J.: Two results on polynomial interpolation in equally spaced points. J. Approximation Theory 65(3), 247–260 (1991)
Trevisan, L., Xue, T.: A derandomized switching lemma and an improved derandomization of AC0. IEEE Conference on Computational Complexity (CCC), 242–247 (2013)
Vazirani, U.: Randomness, adversaries, and computation. Ph.D. dissertation, University of California, Berkeley (1986)
Yao, A.C.: Theory and applications of trapdoor functions. In: Proceedings of 23rd IEEE annual symposium on foundations of computer science, pp 80–91 (1982)
Acknowledgments
We would like to thank the anonymous referees for their detailed and constructive comments which improved the presentation of the paper. We are grateful to an anonymous referee for suggesting the simple proof of Lemma 7 presented here.
Additional information
Research supported by FEA URB grant Program Number 288309, American University of Beirut.
Appendix A
1.1 Proof of Lemma 1
Lemma 1
If \(\mu\) is a probability distribution on \(\{0,1\}^{n}\) such that \(\mu (F_{\mathfrak {p}}=1)=0\), then there exists a \(\mathfrak {p}\)-symmetric probability distribution \(\mu^{*}\) on \(\{0,1\}^{n}\) such that \(\mu ^{*}(F_{\mathfrak {p}}=1)=0\) and \(bias(\mu^{*}) \leq bias(\mu)\).
Proof
Let \(G\subset GL_{n}(\mathbb {F}_{2})\) be the group of n×n invertible \(\mathfrak {p}\)-block permutation matrices over \(\mathbb {F}_{2}\), i.e., G consists of the invertible n×n matrices T over \(\mathbb {F}_{2}\) for which there exist a permutation π : [m]→[m] and invertible c×c matrices \(T^{(1)},\ldots , T^{(m)} \in GL_{c}(\mathbb {F}_{2})\) such that for each j ∈ [m], we have \((T x)|_{\mathfrak {p}(\pi (j))} = T^{(j)} (x|_{\mathfrak {p}(j)})\). Thus T is uniquely determined by the permutation π and the matrices \(T^{(1)},\ldots ,T^{(m)}\).
For T ∈ G, define the probability distribution \(\mu_{T}\) on \(\{0,1\}^{n}\) as \(\mu_{T}(x) := \mu(Tx)\). Symmetrize μ by averaging: define the probability distribution \(\mu^{*}\) on \(\{0,1\}^{n}\) as \(\mu^{*}(x) := \mathrm{E}_{T \in G}\, \mu_{T}(x)\). The key points are:
i) \(W_{\mathfrak {p}}(x) = W_{\mathfrak {p}}(Tx)\), \(\forall x\in {\mathbb {F}_{2}^{n}}\) and ∀T ∈ G.
This follows from the fact that the matrices \(T^{(1)},\ldots ,T^{(m)}\) are invertible.
ii) Conversely, \(\forall x,y\in {\mathbb {F}_{2}^{n}}\) such that \(W_{\mathfrak {p}}(x) = W_{\mathfrak {p}}(y)\), ∃T ∈ G such that y = T x.
To construct T, choose the permutation π to arbitrarily map the clauses satisfied by x to those satisfied by y, i.e., \(x|_{\mathfrak {p}(j)} \neq 0\) iff \(y|_{\mathfrak {p}(\pi (j))} \neq 0\). Then for each j ∈ [m], choose \(T^{(j)}\) so that \( T^{(j)} (x|_{\mathfrak {p}(j)})=y|_{\mathfrak {p}(\pi (j))}\).
iii) \(bias(\mu_{T}) = bias(\mu)\) for each T ∈ G, since the matrices in G are invertible.
This follows from the fact that for any invertible matrix \(T \in GL_{n}(\mathbb {F}_{2})\), we have \(bias_{z}(\mu _{T})= bias_{{T^{-1}}^{*}z}(\mu )\), where ∗ denotes the transpose operator. Namely,
$$\begin{array}{@{}rcl@{}} bias_{z}(\mu_{T})&=& \sum\limits_{x}\mu(Tx)(-1)^{\langle x,z\rangle} \\ & =& \sum\limits_{x}\mu(x)(-1)^{\langle T^{-1}x,z\rangle} \\ &=& \sum\limits_{x}\mu(x)(-1)^{\langle x,{T^{-1}}^{*}z\rangle} \text{(since \(\langle T^{-1}x,z\rangle = \langle x,{T^{-1}}^{*}z\rangle\))}\\ &=& bias_{{T^{-1}}^{*}z}(\mu). \end{array} $$
Since \(\mu (F_{\mathfrak {p}}=1)=0\), it follows from (i) that \(\mu _{T}(F_{\mathfrak {p}}=1)=0\) for each T ∈ G. Hence \(\mu ^{*}(F_{\mathfrak {p}}=1)=0\). The fact that μ ∗ is \(\mathfrak {p}\)-symmetric follows from (ii).
Finally, for each nonzero \(z \in \{0,1\}^{n}\), we have \(bias_{z}(\mu^{*}) = \mathrm{E}_{T \in G}\, bias_{z}(\mu_{T})\), hence \(|bias_{z}(\mu^{*})| \leq \max_{T \in G} |bias_{z}(\mu_{T})| \leq \max_{T \in G} bias(\mu_{T}) = bias(\mu)\), where the last equality follows from (iii). Therefore, \(bias(\mu^{*}) \leq bias(\mu)\). □
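The identity \(bias_{z}(\mu _{T})= bias_{{T^{-1}}^{*}z}(\mu )\) used in (iii) can be verified by brute force over \(\mathbb{F}_{2}^{n}\) for small n; the following sketch (all helper names are ours) draws a random distribution and a random invertible T, which covers the general invertible case and hence in particular the block-permutation matrices in G:

```python
import random
from itertools import product

n = 4
VECS = list(product((0, 1), repeat=n))

def matvec(T, x):
    """Matrix-vector product over F_2."""
    return tuple(sum(T[i][j] * x[j] for j in range(n)) % 2 for i in range(n))

def transpose(T):
    return [[T[j][i] for j in range(n)] for i in range(n)]

def inverse_f2(T):
    """Gauss-Jordan inverse over F_2; returns None if T is singular."""
    A = [T[i][:] + [int(i == j) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next((r for r in range(col, n) if A[r][col]), None)
        if piv is None:
            return None
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [a ^ b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def bias_z(mu, z):
    """Signed bias: sum_x mu(x) * (-1)^<x,z>."""
    return sum(mu[x] * (-1) ** (sum(a * b for a, b in zip(x, z)) % 2)
               for x in VECS)

random.seed(0)
# Random distribution mu and random invertible T over F_2:
w = [random.random() for _ in VECS]
mu = {x: wi / sum(w) for x, wi in zip(VECS, w)}
Tinv = None
while Tinv is None:
    T = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
    Tinv = inverse_f2(T)

mu_T = {x: mu[matvec(T, x)] for x in VECS}   # mu_T(x) := mu(Tx)
Tinv_star = transpose(Tinv)                  # (T^{-1})^*

# bias_z(mu_T) = bias_{(T^{-1})^* z}(mu) for every z:
for z in VECS:
    assert abs(bias_z(mu_T, z) - bias_z(mu, matvec(Tinv_star, z))) < 1e-9
```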
Bazzi, L., Nahas, N. Small-Bias is Not Enough to Hit Read-Once CNF. Theory Comput Syst 60, 324–345 (2017). https://doi.org/10.1007/s00224-016-9680-6