
Reflecting and self-confident inductive inference machines

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 997)

Abstract

Reflection denotes someone's activity of thinking about oneself as well as about one's relation to the outside world. In particular, reflecting means pondering one's capabilities and limitations. Reasoning about one's own competence is a central issue of reflective behaviour, and reflection is a key issue of recent artificial intelligence.

This paper investigates the problem of automated reasoning about the competence of inductive inference machines. Reflective inductive inference machines are those able to identify whether or not the information presented exceeds their learning capabilities. An inductive inference machine is self-confident if it usually trusts in its ability to solve the learning problem at hand. It is reflecting and self-confident if it normally believes in its power, but recognizes problems exceeding its competence. The problem is formalized and studied within the setting of inductively learning total recursive functions. There is a crucial distinction between immediately reflecting inductive inference machines and those which need an a priori unknown amount of time for reasoning about their competence.

The core result is a characterization of the problem classes solvable by reflective inductive inference machines. Roughly speaking, for a given problem class \(U \subseteq \mathcal{R}\), one may develop a reflecting and self-confident inductive inference machine if and only if the development of such a machine is not necessary at all, because the problem class can be reasonably extended so that reflection turns out to be unnecessary. A derived result shows that, contrary to intuition, there is no difference in power between reflecting and immediately reflecting inductive inference machines.
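To make the notion of a reflecting machine concrete, the following is a toy sketch, not the paper's formal model: a learner-by-enumeration over a small, hypothetical class U of total functions. Fed growing initial segments f(0), f(1), …, it outputs the first consistent hypothesis in U, and it "reflects" by refuting the data as soon as no hypothesis in U remains consistent. Because U here is finite, the reflection is immediate; the class U and all names are illustrative assumptions.

```python
# Hypothetical class U of total functions on the naturals (illustration only).
U = {
    "square": lambda n: n * n,
    "double": lambda n: 2 * n,
    "succ":   lambda n: n + 1,
}

def iim(data):
    """Given data = [f(0), f(1), ...], return the name of the first
    hypothesis in U consistent with all of it, or None, meaning the
    machine reflects: the target provably lies outside U."""
    for name, h in U.items():
        if all(h(n) == v for n, v in enumerate(data)):
            return name
    return None  # data exceeds this machine's competence

print(iim([0, 1, 4]))      # "square" is consistent
print(iim([0, 2, 4, 6]))   # "double" is consistent
print(iim([1, 3, 9]))      # no hypothesis fits -> None (reflection)
```

For an infinite, recursively enumerable class the consistency check can no longer be exhausted in one pass, which is where the paper's distinction between immediate and delayed reflection becomes substantive.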

The ultimate goal of the present paper is to contribute to a better understanding of the reflection problem in artificial intelligence; it is intended as a launching pad for this endeavor.

The work has been partially supported by the German Federal Ministry for Research and Technology (BMFT) within the Joint Project (BMFT-Verbundprojekt) GOSLER on Algorithmic Learning for Knowledge-Based Systems under contract no. 413-4001-01 IW 101 A. A preliminary version of the approach and some basic results appeared as GOSLER Report # 24/94, September 1994.

The author gratefully acknowledges fruitful discussions about both the problem of reflective systems' behaviour, in general, and the approach presented, in particular, with Oksana Arnold, Gunter Grieser, and Steffen Lange. Several discussions with Gunter Grieser are partially reflected in the list of research problems presented above. Anonymous referees provided very helpful criticism.




Editor information

Klaus P. Jantke, Takeshi Shinohara, Thomas Zeugmann


Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Jantke, K.P. (1995). Reflecting and self-confident inductive inference machines. In: Jantke, K.P., Shinohara, T., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 1995. Lecture Notes in Computer Science, vol 997. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-60454-5_46


  • DOI: https://doi.org/10.1007/3-540-60454-5_46


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60454-9

  • Online ISBN: 978-3-540-47470-8

  • eBook Packages: Springer Book Archive
