From Computational Learning Theory to Discovery Science

  • Conference paper
Automata, Languages and Programming

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1644))

Abstract

Machine learning is one of the important subjects of AI, motivated by many real-world applications. In theoretical computer science, researchers have also introduced mathematical frameworks for investigating machine learning, and within these frameworks many interesting results have been obtained. We are now proceeding to a new stage: studying how to apply these fruitful theoretical results to real problems. In this paper we point out that “adaptivity” is one of the important issues to consider in applications of learning techniques, and we propose a learning algorithm with this feature.
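The “adaptivity” named in the abstract — letting the amount of data an algorithm consumes depend on what it has already seen, rather than being fixed in advance — can be illustrated with a small adaptive-sampling sketch. The function below is a minimal illustration only, not the algorithm proposed in the paper: it estimates the mean of a 0/1 source, stopping as soon as a Hoeffding-style confidence bound certifies the estimate. The function name and the per-round confidence allocation delta/(n(n+1)) are our own choices for the sketch.

```python
import math
import random

def adaptive_estimate(sample, eps=0.05, delta=0.05):
    """Adaptively estimate the mean of a {0,1}-valued source.

    After each draw, a Hoeffding bound is checked; sampling stops as
    soon as the estimate is within eps of the true mean with
    probability at least 1 - delta.  Easy sources (low variance in the
    bound's worst case is not exploited here, but fewer rounds are
    needed for larger eps) terminate sooner -- the sample size is
    chosen by the data, not fixed in advance.
    """
    total, n = 0, 0
    while True:
        total += sample()  # draw one example, assumed to be 0 or 1
        n += 1
        # Union bound over rounds: allocate delta/(n(n+1)) confidence
        # to round n, so the failure probabilities sum to at most delta.
        width = math.sqrt(math.log(2.0 * n * (n + 1) / delta) / (2.0 * n))
        if width <= eps:
            return total / n
```

For example, estimating the bias of a coin with `adaptive_estimate(lambda: 1 if rng.random() < 0.3 else 0)` draws only as many flips as the stopping rule requires, which is the behavior a fixed-sample-size estimator cannot offer.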






Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Watanabe, O. (1999). From Computational Learning Theory to Discovery Science. In: Wiedermann, J., van Emde Boas, P., Nielsen, M. (eds) Automata, Languages and Programming. Lecture Notes in Computer Science, vol 1644. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48523-6_11


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66224-2

  • Online ISBN: 978-3-540-48523-0

