Efficiently Approximating Weighted Sums with Exponentially Many Terms

  • Conference paper

Computational Learning Theory (COLT 2001)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2111)

Abstract

We explore applications of Markov chain Monte Carlo methods for weight estimation over inputs to the Weighted Majority (WM) and Winnow algorithms. This is useful when there are exponentially many such inputs and no apparent means to efficiently compute their weighted sum. The applications we examine are pruning classifier ensembles using WM and learning general DNF formulas using Winnow. These uses require exponentially many inputs, so we define Markov chains over the inputs to approximate the weighted sums. We state performance guarantees for our algorithms and present preliminary empirical results.
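
The paper's contribution is the specific Markov chains it defines for WM-based ensemble pruning and Winnow-based DNF learning, together with their performance guarantees; those constructions are in the full text. Purely as an illustration of the general technique the abstract describes, here is a minimal sketch (not the authors' construction) that uses a single-site Metropolis chain to approximate a weighted vote over the 2^n bit vectors of length n. The `mistakes` function is a placeholder loss, and the names and parameters (`beta`, `steps`, `samples`) are hypothetical choices for the sketch.

```python
import random

def mistakes(x):
    """Placeholder loss: count of 1-bits. In a real run this would be the
    input's accumulated mistake count under WM or Winnow."""
    return sum(x)

def weight(x, beta=0.5):
    """Weighted-Majority-style weight: beta raised to the mistake count."""
    return beta ** mistakes(x)

def metropolis_sample(n, steps=10_000, beta=0.5):
    """Single-site Metropolis walk over {0,1}^n whose stationary
    distribution is proportional to weight(x)."""
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        i = random.randrange(n)   # propose flipping one coordinate
        y = x[:]
        y[i] ^= 1
        # Accept with probability min(1, w(y)/w(x)).
        if random.random() < min(1.0, weight(y, beta) / weight(x, beta)):
            x = y
    return x

def approx_weighted_fraction(predict, n, samples=200, beta=0.5):
    """Estimate sum_x w(x)*predict(x) / sum_x w(x) -- the weighted fraction
    of inputs voting 1 -- by averaging predict over samples drawn
    (approximately) from the weight distribution."""
    draws = (metropolis_sample(n, beta=beta) for _ in range(samples))
    return sum(predict(x) for x in draws) / samples

if __name__ == "__main__":
    # Weighted fraction of 20-bit vectors whose first bit is 1.
    print(approx_weighted_fraction(lambda x: x[0], n=20))
```

Such an estimate is only as good as the chain's mixing; identifying chains over these particular input spaces that mix well enough to yield guarantees is precisely what the paper addresses.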

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Chawla, D., Li, L., Scott, S. (2001). Efficiently Approximating Weighted Sums with Exponentially Many Terms. In: Helmbold, D., Williamson, B. (eds) Computational Learning Theory. COLT 2001. Lecture Notes in Computer Science, vol 2111. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44581-1_6

  • DOI: https://doi.org/10.1007/3-540-44581-1_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42343-0

  • Online ISBN: 978-3-540-44581-4

  • eBook Packages: Springer Book Archive
