Abstract
The burgeoning technology of machine learning is beginning to provide some insight into the nature of learning and the role of teaching in expediting the learning process. A number of systems that learn concepts and procedures from examples have been described in the research literature. In general, these require a teacher who not only has an analytical understanding of the problem domain but is also familiar with some of the internal workings of the learning system itself. This is because the learner performs a search of a concept space that would be quite intractable but for the teacher's selection of guiding examples.
A concept learning system's teacher must select a complete, properly ordered set of examples: one that results in a successful search by the system for an appropriate concept description. In some systems the set of examples determines whether or not the concept can be learned, while the order of presentation affects only execution time. In others, the examples and their order of presentation are jointly responsible for success. Yet others occasionally select critical examples themselves and present them to the teacher for classification. In all cases, however, the teacher provides the primary means by which the search is pruned. Sometimes the teacher must also prime the learner with considerable initial knowledge before learning can begin.
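To make the teacher's pruning role concrete, the following sketch (not taken from the paper) enumerates a toy space of conjunctive concepts over a few invented attributes and discards every candidate that is inconsistent with each teacher-classified example. The attribute names, domains, instances, and labels are all hypothetical, and the brute-force enumeration stands in for the far more economical search a real learner would perform.

```python
"""Illustrative sketch only: a brute-force 'version space' of conjunctive
concepts, pruned by a teacher-chosen sequence of classified examples.
All attributes and examples are invented for this illustration."""

from itertools import product

# Hypothetical attribute domains for a toy concept-learning task.
DOMAINS = {
    "sky":  ["sunny", "rainy"],
    "temp": ["warm", "cold"],
    "wind": ["strong", "weak"],
}
ATTRS = list(DOMAINS)

def all_hypotheses():
    """Every conjunctive hypothesis: each attribute is a value or '?' (don't care)."""
    choices = [DOMAINS[a] + ["?"] for a in ATTRS]
    return [dict(zip(ATTRS, combo)) for combo in product(*choices)]

def covers(hypothesis, instance):
    """A conjunctive hypothesis covers an instance if every literal agrees or is '?'."""
    return all(hypothesis[a] in ("?", instance[a]) for a in ATTRS)

def consistent(hypothesis, instance, label):
    """Consistent means: covers the positive examples, excludes the negatives."""
    return covers(hypothesis, instance) == label

# A teacher-chosen, teacher-ordered sequence of classified examples (invented).
examples = [
    ({"sky": "sunny", "temp": "warm", "wind": "strong"}, True),
    ({"sky": "rainy", "temp": "cold", "wind": "strong"}, False),
    ({"sky": "sunny", "temp": "warm", "wind": "weak"},   True),
]

version_space = all_hypotheses()
print(f"initial candidates: {len(version_space)}")
for instance, label in examples:
    version_space = [h for h in version_space if consistent(h, instance, label)]
    print(f"after {instance} -> {label}: {len(version_space)} candidates remain")
```

Running the sketch prints a candidate count that shrinks with each well-chosen example, which is the sense in which the teacher, rather than the learner, does the pruning; a poorly chosen or poorly ordered sequence would leave the space large or mislead the search.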
Not surprisingly, systems that demand more of the teacher are able to learn more sophisticated concepts. This paper examines the relationship between teaching requirements and learning power in current concept learning systems. We introduce concept learning by machine, with emphasis on the role of the human teacher in rendering practical an otherwise intractable concept search. Machine learning has drawn many lessons from human learning and will continue to do so; in turn, it can contribute more formal, if simpler, analyses of concept learning from examples.