Analysis of Perceptron-Based Active Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3559)

Abstract

We start by showing that in an active learning setting, the Perceptron algorithm needs \(\Omega(\frac{1}{\epsilon^{2}})\) labels to learn linear separators within generalization error ε. We then present a simple selective sampling algorithm for this problem, which combines a modification of the perceptron update with an adaptive filtering rule for deciding which points to query. For data distributed uniformly over the unit sphere, we show that our algorithm reaches generalization error ε after asking for just \(\tilde{O}(d \log \frac{1}{\epsilon})\) labels. This exponential improvement over the usual sample complexity of supervised learning has previously been demonstrated only for the computationally more complex query-by-committee algorithm.
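The abstract describes the algorithm only at a high level: a modified perceptron update combined with an adaptive filtering rule that decides which points to query. The Python sketch below shows one way such a selective-sampling loop could look; it is an illustration under stated assumptions, not the algorithm analyzed in the paper. In particular, the reflection-style update, the initial threshold of 1/√d, the threshold-halving schedule, and the patience parameter are choices introduced here for concreteness.

import numpy as np

def sample_uniform_sphere(d, rng):
    # Draw a point uniformly at random from the unit sphere in R^d.
    x = rng.standard_normal(d)
    return x / np.linalg.norm(x)

def selective_sampling_perceptron(d, oracle, rounds=10000, patience=32, seed=0):
    # Sketch of a selective-sampling perceptron (assumed details; see lead-in):
    # query a label only when the point lies close to the current decision
    # boundary; on a queried mistake, apply a reflection-style update; shrink
    # the query threshold when many consecutive queries produce no mistakes.
    rng = np.random.default_rng(seed)
    w = sample_uniform_sphere(d, rng)      # initial hypothesis (assumption)
    s = 1.0 / np.sqrt(d)                   # initial query threshold (assumption)
    labels_used = 0
    no_mistake_streak = 0
    for _ in range(rounds):
        x = sample_uniform_sphere(d, rng)  # unlabeled point, uniform on the sphere
        margin = np.dot(w, x)
        if abs(margin) > s:                # filtering rule: skip points far from the boundary
            continue
        y = oracle(x)                      # query the label, assumed to be +1 or -1
        labels_used += 1
        if np.sign(margin) != y:           # mistake on a queried point
            w = w - 2.0 * margin * x       # reflection update; preserves ||w|| = 1
            no_mistake_streak = 0
        else:
            no_mistake_streak += 1
            if no_mistake_streak >= patience:
                s /= 2.0                   # adaptively tighten the query region
                no_mistake_streak = 0
    return w, labels_used

As a usage illustration, a noiseless oracle for a hidden unit vector u could be defined as oracle = lambda x: 1.0 if np.dot(u, x) >= 0 else -1.0; the counter labels_used then measures label complexity, the quantity the bounds in the abstract refer to.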

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Dasgupta, S., Kalai, A.T., Monteleoni, C. (2005). Analysis of Perceptron-Based Active Learning. In: Auer, P., Meir, R. (eds.) Learning Theory. COLT 2005. Lecture Notes in Computer Science, vol. 3559. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11503415_17

  • DOI: https://doi.org/10.1007/11503415_17

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-26556-6

  • Online ISBN: 978-3-540-31892-7

  • eBook Packages: Computer Science, Computer Science (R0)
