Abstract.
We prove a version of the derandomized Direct Product Lemma for deterministic space-bounded algorithms. Suppose a Boolean function \(g : \{0, 1\}^{n} \rightarrow \{0, 1\}\) cannot be computed on more than a fraction \(1 - \delta\) of inputs by any deterministic time-\(T\) and space-\(S\) algorithm, where \(\delta \leq 1/t\) for some \(t\). Then, for \(t\)-step walks \(w = (v_1, \ldots, v_t)\) in some explicit \(d\)-regular expander graph on \(2^n\) vertices, the function \(g'(w) \stackrel{\mathrm{def}}{=} (g(v_1), \ldots, g(v_t))\) cannot be computed on more than a fraction \(1 - \Omega(t\delta)\) of inputs by any deterministic time \(\approx T/d^{t} - \mathrm{poly}(n)\) and space \(\approx S - O(t)\) algorithm. As an application, by iterating this construction, we get a deterministic linear-space "worst-case to constant average-case" hardness amplification reduction, as well as a family of logspace encodable/decodable error-correcting codes that can correct up to a constant fraction of errors. Logspace encodable/decodable codes (with linear-time encoding and decoding) were previously constructed by Spielman (1996). Our codes have weaker parameters (encoding length is polynomial, rather than linear), but have a conceptually simpler construction. The proof of our Direct Product Lemma is inspired by Dinur's remarkable proof of the PCP theorem by gap amplification using expanders (Dinur 2006).
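To make the construction concrete, the following is a minimal sketch of the derandomized direct product \(g'(w) = (g(v_1), \ldots, g(v_t))\) over \(t\)-step walks. The 3-regular "toy" graph and the example function `g` below are illustrative stand-ins (not the explicit expander of the paper): the point is only that the input to \(g'\) is a walk, of which there are \(n \cdot d^{t-1}\), rather than an arbitrary \(t\)-tuple of vertices.

```python
def toy_3_regular_graph(n_vertices):
    """Adjacency lists for a 3-regular graph on an even number of vertices:
    cycle edges v-1, v+1 plus a chord to the antipodal vertex v + n/2.
    This toy graph stands in for the explicit expander used in the paper."""
    half = n_vertices // 2
    return {v: [(v - 1) % n_vertices, (v + 1) % n_vertices, (v + half) % n_vertices]
            for v in range(n_vertices)}

def walks(graph, t):
    """Enumerate all walks (v_1, ..., v_t) with t vertices; for a d-regular
    graph on n vertices there are n * d**(t-1) of them."""
    stack = [(start,) for start in graph]
    while stack:
        w = stack.pop()
        if len(w) == t:
            yield w
        else:
            for u in graph[w[-1]]:
                stack.append(w + (u,))

def direct_product(g, walk):
    """The direct-product function g'(w) = (g(v_1), ..., g(v_t))."""
    return tuple(g(v) for v in walk)
```

For example, with 8 vertices and \(t = 3\) there are \(8 \cdot 3^2 = 72\) walks, so the input length of \(g'\) grows only by an additive \(O(t \log d)\) bits over that of \(g\), which is what makes the amplification "derandomized" compared with the plain direct product over all \(t\)-tuples.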
Manuscript received 8 February 2006
Cite this article
Guruswami, V., Kabanets, V. Hardness Amplification via Space-Efficient Direct Products. comput. complex. 17, 475–500 (2008). https://doi.org/10.1007/s00037-008-0253-1