Abstract
The seminal hardcore lemma of Impagliazzo states that for any mildly hard Boolean function f, there is a subset of inputs, called a hardcore set, on which the function is extremely hard, almost as hard as a random Boolean function. This implies that the output distribution of f on a random input looks like a distribution with some statistical randomness. Can we have something similar for hard functions with several output bits? Can we say that the output distribution of such a general function on a random input looks like a distribution containing several bits of randomness? If so, one could simply apply any statistical extractor to extract computational randomness from the output of f. However, conventional wisdom tells us to apply extractors with some additional reconstruction property, rather than just any extractor. Does this mean that there is no analogous hardcore lemma for general functions?
We show that a general hard function does indeed have some kind of hardcore set, but at the price of a security loss proportional to the number of output values. More precisely, consider a hard function \(f: \{0,1\}^n \rightarrow [V] = \{1,\ldots,V\}\) such that any circuit of size s can compute f correctly on at most a \(\frac{1}{L}(1-\gamma)\) fraction of inputs, for some L ∈ [1, V−1] and γ ∈ (0,1). Then we show that for some I ⊆ [V] with |I| = L+1, there exists a hardcore set \(H_I \subseteq f^{-1}(I)\) with density \(\gamma/{V \choose L+1}\) such that any circuit of size s′ can compute f correctly on at most a \(\frac{1+\epsilon}{L+1}\) fraction of inputs in \(H_I\). Here, s′ is smaller than s by some poly(V, 1/ε, log(1/γ)) factor, which results in a security loss of such a factor. We show that it is essentially impossible to guarantee a much larger hardcore set or a much smaller security loss. Finally, we show how our hardcore lemma can be used to extract computational randomness from general hard functions.
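To make the quantitative tradeoff concrete, the following sketch computes the two guarantees stated above, the density \(\gamma/{V \choose L+1}\) of the hardcore set and the success bound \(\frac{1+\epsilon}{L+1}\), for given parameters. This is only an illustration of the theorem statement (the function name and example values are ours, not from the paper), and it does not attempt to instantiate the unspecified poly(V, 1/ε, log(1/γ)) circuit-size loss.

```python
from math import comb

def hardcore_params(V, L, gamma, eps):
    """Illustrative computation of the guarantees in the generalized
    hardcore lemma above (parameter names follow the abstract).

    Returns (density, success_bound), where:
      density       = gamma / C(V, L+1), the density of the hardcore
                      set H_I inside f^{-1}(I) for some I with |I| = L+1;
      success_bound = (1 + eps) / (L + 1), the maximum fraction of
                      inputs in H_I that a size-s' circuit can compute
                      f on correctly.
    """
    density = gamma / comb(V, L + 1)
    success_bound = (1 + eps) / (L + 1)
    return density, success_bound

# Hypothetical example: V = 4 output values, L = 1, gamma = 0.5, eps = 0.1.
# Then C(4, 2) = 6, so the hardcore set has density 0.5/6 inside f^{-1}(I),
# and no small circuit does better than a (1.1)/2 fraction on it.
density, bound = hardcore_params(4, 1, 0.5, 0.1)
```

Note that as L grows toward V − 1, the success bound (1+ε)/(L+1) approaches that of guessing a near-uniform value in I, which is the sense in which f is "almost as hard as random" on the hardcore set.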
References
Auer, P., Cesa-Bianchi, N., Freund, Y., Schapire, R.: The non-stochastic multi-armed bandit problem. SIAM J. Comput. 32(1), 48–77 (2002)
Barak, B., Hardt, M., Kale, S.: The uniform hardcore lemma via approximate Bregman projections. In: SODA 2008, pp. 1193–1200 (2008)
Barak, B., Shaltiel, R., Wigderson, A.: Computational analogues of entropy. In: Proc. APPROX-RANDOM, pp. 200–215 (2003)
Cover, T., Thomas, J.: Elements of Information Theory. Wiley, Chichester (1991)
Goldreich, O., Rubinfeld, R., Sudan, M.: Learning polynomials with queries: the highly noisy case. SIAM J. Disc. Math. 13(4), 535–570 (2000)
Healy, A., Vadhan, S., Viola, E.: Using nondeterminism to amplify hardness. SIAM J. Comput. 35(4), 903–931 (2006)
Holenstein, T.: Key agreement from weak bit agreement. In: STOC 2005, pp. 664–673 (2005)
Hsiao, C.-Y., Lu, C.-J., Reyzin, L.: Conditional computational entropy, or toward separating pseudoentropy from compressibility. In: Naor, M. (ed.) EUROCRYPT 2007. LNCS, vol. 4515, pp. 169–186. Springer, Heidelberg (2007)
Impagliazzo, R.: Hard-core distributions for somewhat hard problems. In: FOCS 1995, pp. 538–545 (1995)
Klivans, A., Servedio, R.A.: Boosting and hard-core sets. Machine Learning 51(3), 217–238 (2003)
Lee, C.-J., Lu, C.-J., Tsai, S.-C.: Deterministic extractors for independent-symbol sources. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4051, pp. 84–95. Springer, Heidelberg (2006)
Lee, C.-J., Lu, C.-J., Tsai, S.-C.: Extracting computational entropy and learning noisy linear functions. In: Ngo, H.Q. (ed.) COCOON 2009. LNCS, vol. 5609, pp. 338–347. Springer, Heidelberg (2009)
Lu, C.-J., Tsai, S.-C., Wu, H.-L.: On the complexity of hard-core set constructions. In: Arge, L., Cachin, C., Jurdziński, T., Tarlecki, A. (eds.) ICALP 2007. LNCS, vol. 4596, pp. 183–194. Springer, Heidelberg (2007)
Nisan, N., Zuckerman, D.: Randomness is linear in space. J. Comput. Syst. Sci. 52(1), 43–52 (1996)
O’Donnell, R.: Hardness amplification within NP. In: STOC, pp. 751–760 (2002)
Shaltiel, R.: Recent developments in explicit constructions of extractors. Bulletin of the EATCS 77, 67–95 (2002)
Sudan, M., Trevisan, L., Vadhan, S.: Pseudorandom generators without the XOR lemma. J. Comput. Syst. Sci. 62(2), 236–266 (2001)
Trevisan, L.: List decoding using the XOR lemma. In: FOCS, pp. 126–135 (2003)
Trevisan, L.: On uniform amplification of hardness in NP. In: STOC, pp. 31–38 (2005)
Yao, A.: Theory and applications of trapdoor functions. In: FOCS 1982, pp. 80–91 (1982)
Zuckerman, D.: General weak random sources. In: FOCS, pp. 534–543 (1990)
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Lee, CJ., Lu, CJ., Tsai, SC. (2011). Computational Randomness from Generalized Hardcore Sets. In: Owe, O., Steffen, M., Telle, J.A. (eds) Fundamentals of Computation Theory. FCT 2011. Lecture Notes in Computer Science, vol 6914. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-22953-4_7
Print ISBN: 978-3-642-22952-7
Online ISBN: 978-3-642-22953-4