The central role of explanations in DISCIPLE

  • Chapter
Knowledge Representation and Organization in Machine Learning

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 347)

Abstract

DISCIPLE is a Knowledge Acquisition system that integrates several of the learning mechanisms recognized in Machine Learning. The central mechanism in DISCIPLE is that of explanations, which is used in all of its learning modes.

When using the Explanation-Based mode of learning, an explanation points out the most relevant features of the examples.

When using the Analogy-Based mode of learning, the explanations are used to generate instances analogous to those provided by the user.

When using the Similarity-Based mode of learning, the explanations serve as "examples" among which similarities are sought.
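
These three uses of an explanation can be made concrete with a small sketch. The Python fragment below is written for this summary only: DISCIPLE's actual representation is not given here, and every structure and name in it is a hypothetical stand-in.

    # One "explanation" links the objects of an example through background
    # relations. Representation and names are hypothetical, not DISCIPLE's.
    explanation = [("isa", "bolt", "fastener"), ("attaches", "bolt", "plate")]

    # Explanation-Based mode: the objects the explanation mentions are the
    # relevant features of the example; everything else may be discarded.
    relevant_features = {term for (_, a, b) in explanation for term in (a, b)}

    # Analogy-Based mode: substituting analogous objects that preserve the
    # relations yields new candidate instances to propose to the user.
    def analogous_instance(explanation, substitution):
        return [(rel, substitution.get(a, a), substitution.get(b, b))
                for (rel, a, b) in explanation]

    screw_case = analogous_instance(explanation, {"bolt": "screw"})

    # Similarity-Based mode: explanations are themselves the "examples";
    # the relations they share form the basis of a generalization.
    def shared_relations(e1, e2):
        return {rel for (rel, _, _) in e1} & {rel for (rel, _, _) in e2}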

The final result of DISCIPLE is the description of the validity domain of the variables contained in the rules. Since the user always provides fully instantiated rules, the system must automatically variabilize them, and must then find the validity domain of these variables by asking the user "clever" questions. Given a particular (instantiated) rule by its user, the system looks in its Knowledge Base for possible explanations of this rule and asks the user to validate them. The set of explanations validated by the user is then used as a set of (almost) sufficient conditions for the application of the instantiated rule.
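
Read as a procedure, this describes an interactive loop: variabilize the rule, have the user validate candidate explanations, then probe each variable's validity domain. The Python sketch below is a loose reconstruction under those assumptions; the knowledge-base interface, the question wording, and all identifiers are invented for illustration and are not DISCIPLE's actual code.

    # Hypothetical reconstruction of the dialogue sketched above.
    def learn_from_rule(rule, kb, ask_user):
        # 1. Variabilize: replace each constant of the fully
        #    instantiated rule by a fresh variable.
        variables = {c: f"?x{i}" for i, c in enumerate(rule.constants())}
        general_rule = rule.substitute(variables)

        # 2. Look in the Knowledge Base for possible explanations of
        #    the rule and keep only those the user validates.
        validated = [e for e in kb.explanations_for(rule)
                     if ask_user(f"Does '{e}' explain the rule? (y/n)") == "y"]

        # 3. The validated explanations become an (almost) sufficient
        #    condition for applying the rule.
        for e in validated:
            general_rule.add_condition(e.substitute(variables))

        # 4. Find each variable's validity domain by asking "clever"
        #    questions about analogous instances.
        domains = {}
        for const, var in variables.items():
            domains[var] = [c for c in kb.similar_objects(const)
                            if ask_user(f"Does the rule hold with {c}? (y/n)") == "y"]
        return general_rule, domains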




Editor information

Katharina Morik


Copyright information

© 1989 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Kodratoff, Y., Tecuci, G. (1989). The central role of explanations in DISCIPLE. In: Morik, K. (eds) Knowledge Representation and Organization in Machine Learning. Lecture Notes in Computer Science, vol 347. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0017220

  • DOI: https://doi.org/10.1007/BFb0017220

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-50768-0

  • Online ISBN: 978-3-540-46081-7

