Separating Models of Learning from Correlated and Uncorrelated Data

  • Conference paper
Learning Theory (COLT 2005)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3559)

Abstract

We consider a natural framework of learning from correlated data, in which successive examples used for learning are generated according to a random walk over the space of possible examples. Previous research has suggested that the Random Walk model is more powerful than comparable standard models of learning from independent examples, by exhibiting learning algorithms in the Random Walk framework that have no known counterparts in the standard model. We give strong evidence that the Random Walk model is indeed more powerful than the standard model, by showing that if any cryptographic one-way function exists (a belief universally held in public-key cryptography), then there is a class of functions that can be learned efficiently in the Random Walk setting but not in the standard setting where all examples are independent.
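
The abstract contrasts two ways labeled examples can be delivered to a learner: drawn independently, or generated by a random walk so that consecutive examples are correlated. As a rough illustration only (not taken from the paper), the sketch below contrasts the two example oracles under the assumption that examples live on the Boolean cube {0,1}^n and that each step of the walk re-randomizes a single uniformly chosen coordinate of the previous example; the function names iid_examples and random_walk_examples and the parity target are hypothetical.

```python
# Illustrative sketch, not the paper's construction: an i.i.d. example oracle
# versus a Random Walk example oracle on the Boolean cube {0,1}^n.
import random

def iid_examples(f, n, m):
    """Standard model: m labeled examples drawn independently and uniformly."""
    for _ in range(m):
        x = [random.randint(0, 1) for _ in range(n)]
        yield x, f(x)

def random_walk_examples(f, n, m):
    """Random Walk model: successive examples are correlated; each new example
    re-randomizes one uniformly chosen coordinate of the previous example."""
    x = [random.randint(0, 1) for _ in range(n)]  # start at a uniform point
    for _ in range(m):
        yield list(x), f(x)
        i = random.randrange(n)       # pick a coordinate uniformly at random
        x[i] = random.randint(0, 1)   # re-randomize it (lazy random walk step)

if __name__ == "__main__":
    # Hypothetical target: parity of the first two bits.
    f = lambda x: x[0] ^ x[1]
    for x, y in random_walk_examples(f, n=5, m=3):
        print(x, y)
```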

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Elbaz, A., Lee, H.K., Servedio, R.A., Wan, A. (2005). Separating Models of Learning from Correlated and Uncorrelated Data. In: Auer, P., Meir, R. (eds) Learning Theory. COLT 2005. Lecture Notes in Computer Science, vol 3559. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11503415_43

  • DOI: https://doi.org/10.1007/11503415_43

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-26556-6

  • Online ISBN: 978-3-540-31892-7

  • eBook Packages: Computer Science (R0)
