Compressed Learning with Regular Concept

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6331)

Abstract

We revisit compressed learning in the PAC learning framework. Specifically, we derive error bounds for learning halfspace concepts from compressed data. We propose a regularity assumption on the pair of concept and data distribution that substantially generalizes earlier assumptions. For a regular concept we define a robust factor that characterizes the margin distribution, and we show that this factor tightly controls the generalization error of the learned classifier. We further extend the analysis to the more general linearly non-separable case. Empirical results on both toy and real-world data validate the analysis.

Supported by NSFC (Grant No. 60975003) and State Key Science and Technology Project on Marine Carbonate Reservoir Characterization (2008ZX05004-006).
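
To make the setting concrete, the sketch below illustrates the compressed-learning pipeline the abstract describes: samples are compressed by a Johnson-Lindenstrauss-style Gaussian random projection, a halfspace is learned directly in the measurement domain, and the empirical margins of the learned classifier are inspected. This is a minimal illustration of the general setup, not the authors' algorithm; the dimensions d and k, the perceptron learner, and all variable names are assumptions made for the example.

    # Illustrative sketch (assumed setup, not the paper's algorithm):
    # learn a halfspace concept from randomly projected data.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic linearly separable data: a random unit-norm halfspace
    # concept w_star in R^d labels Gaussian samples.
    d, n, k = 1000, 500, 50   # ambient dim, sample size, compressed dim (assumed)
    w_star = rng.normal(size=d)
    w_star /= np.linalg.norm(w_star)
    X = rng.normal(size=(n, d))
    y = np.sign(X @ w_star)

    # Johnson-Lindenstrauss-style compression: i.i.d. N(0, 1/k) entries,
    # so squared norms (and hence margins) are approximately preserved.
    A = rng.normal(scale=1.0 / np.sqrt(k), size=(k, d))
    Z = X @ A.T               # compressed measurements in R^k

    # Learn a halfspace in the measurement domain with a plain perceptron.
    w = np.zeros(k)
    for _ in range(100):      # epochs
        mistakes = 0
        for z, label in zip(Z, y):
            if label * (w @ z) <= 0:
                w += label * z
                mistakes += 1
        if mistakes == 0:
            break

    # Normalized empirical margins of the learned classifier; their
    # distribution is what a robust-factor-type quantity would summarize.
    margins = y * (Z @ w) / (np.linalg.norm(w) * np.linalg.norm(Z, axis=1))
    print(f"training error: {np.mean(np.sign(Z @ w) != y):.3f}")
    print(f"min / median margin: {margins.min():.4f} / {np.median(margins):.4f}")

The minimum and median margins printed at the end give a rough picture of the margin distribution on the compressed data, the object that a robust-factor-type quantity is meant to control.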

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lv, J., Zhang, J., Wang, F., Wang, Z., Zhang, C. (2010). Compressed Learning with Regular Concept. In: Hutter, M., Stephan, F., Vovk, V., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2010. Lecture Notes in Computer Science (LNAI), vol. 6331. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-16108-7_16

  • DOI: https://doi.org/10.1007/978-3-642-16108-7_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-16107-0

  • Online ISBN: 978-3-642-16108-7

  • eBook Packages: Computer Science (R0)
