Abstract
Machine learning is one of the central subjects of AI, motivated by many real-world applications. Theoretical computer scientists have introduced mathematical frameworks for investigating machine learning, and within these frameworks many interesting results have been obtained. We are now proceeding to a new stage: studying how to apply these fruitful theoretical results to real problems. This paper points out that "adaptivity" is one of the key issues when considering applications of learning techniques, and proposes a learning algorithm with this feature.
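To illustrate what adaptivity can mean in a sampling context, the sketch below estimates the success probability of a 0/1 source with an adaptive stopping rule in the spirit of Lipton and Naughton's adaptive sampling: the number of draws is not fixed in advance but adapts to the unknown probability. This is a minimal illustration only, not the algorithm proposed in the paper; the function and parameter names are hypothetical.

```python
import random

def adaptive_fraction_estimate(draw, m=200):
    """Estimate the success probability of a 0/1 source by sampling
    until m successes have been observed.  The stopping point adapts
    to the data: rarer events automatically trigger more draws.
    (Illustrative sketch only, not the paper's algorithm; assumes
    the source eventually produces a success.)"""
    successes = 0
    draws = 0
    while successes < m:
        draws += 1
        successes += draw()
    return successes / draws

# Example: a biased coin with true success probability 0.25.
random.seed(0)
coin = lambda: 1 if random.random() < 0.25 else 0
est = adaptive_fraction_estimate(coin, m=200)
```

With m = 200 required successes, the estimate concentrates near the true probability 0.25, while the total number of draws (roughly 800 here) was never specified up front; a fixed-size sample would have had to be sized for the worst case instead.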
© 1999 Springer-Verlag Berlin Heidelberg
Cite this paper
Watanabe, O. (1999). From Computational Learning Theory to Discovery Science. In: Wiedermann, J., van Emde Boas, P., Nielsen, M. (eds) Automata, Languages and Programming. Lecture Notes in Computer Science, vol 1644. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48523-6_11
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-66224-2
Online ISBN: 978-3-540-48523-0