System Induction Games and Cognitive Modeling as an AGI Methodology

  • Conference paper: Artificial General Intelligence (AGI 2016)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9782)


Abstract

We propose a methodology for using human cognition as a template for artificial generally intelligent agents that learn from experience. In particular, we consider the problem of learning certain Mealy machines from observations of their behavior; this is a general but conceptually simple learning task that can be given to humans as well as machines. We illustrate by example the sorts of observations that can be gleaned from studying human performance on this task.
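For concreteness, the learning target can be sketched in a few lines of code. The following Python sketch is ours, not the paper's: the MealyMachine class, the two-symbol output alphabet, and the toy repeat-detector machine at the end are illustrative assumptions, included only to show what an observable input/output trace of such a system looks like (the formalism itself is recalled in note 1 below).

    class MealyMachine:
        """At each timestep: read input i, emit o = f(s, i), move to state s' = g(s, i)."""

        def __init__(self, initial_state, output_fn, transition_fn):
            self.state = initial_state
            self.output_fn = output_fn          # f(s, i) -> output
            self.transition_fn = transition_fn  # g(s, i) -> next state

        def step(self, symbol):
            out = self.output_fn(self.state, symbol)
            self.state = self.transition_fn(self.state, symbol)
            return out

        def run(self, inputs):
            """The machine's observable behavior: the output sequence for an input sequence."""
            return [self.step(i) for i in inputs]

    # Toy example (not one of the paper's games): emit "B" when the current input
    # repeats the previous one, otherwise emit the null symbol "-".
    machine = MealyMachine(
        initial_state=None,
        output_fn=lambda s, i: "B" if i == s else "-",
        transition_fn=lambda s, i: i,  # the state is simply the last input seen
    )
    print(machine.run(list("AACAA")))  # ['-', 'B', '-', '-', 'B']

A learner in a system induction game sees only traces of this kind and must infer the underlying output and transition functions from them.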


Notes

  1. Recall that a Mealy machine is a finite-state machine which, at each timestep, receives an input i, produces an output \(o = f(s,i)\) (where s is its present state), and changes state to \(s' = g(s,i)\). Actually, for our purposes there is no particular reason to assume a finite state space, but the games we will consider do have this property.

  2. In fact, the general problem is NP-complete [6].

  3. More precisely, a SIG whose description is about the same length as the description of an integer sequence is likely to be harder to figure out than the integer sequence.

  4. As they are based on a single subject, we should not expect our observations to generalize in all details to other subjects. This is not a problem for our purposes, since the goal is to collect a sample of some of the approaches people apply to SIGs, not to survey them completely. One other note is in order: since the author also wrote the games, some steps were taken to avoid recalling how they worked during play. The games were played some weeks after being created, had their inputs shuffled, and were drawn from a larger set of 48 games.

  5. The − input symbol was chosen as a “null” symbol to indicate nothing of note had happened; this allows an asynchronous interaction to be modeled within a synchronous formalism.

  6. There was some weighting, chosen for each game with the intent of making it more learnable.

  7. In the SIG formalism, all state is hidden; what we mean by “really” hidden state is information about the state which cannot be inferred from the recent history.

  8. The sample of situations considered was generated by taking all distinct histories (treating relabelings as equivalent) satisfying two conditions: (i) that the history contained mixed evidence and (ii) that the last two inputs were the same. By “mixed evidence” we mean that there was at least one input that was followed by both outputs; this restriction was chosen because cases with no mixed evidence were uniformly felt to be easy decisions (just do what worked last time). The second restriction was an arbitrary choice to reduce the sample to a manageable size (this filtering is sketched in code after these notes).

  9. These questions speak to the problem of what inductive bias is appropriate for AGI. There are of course theoretical proposals like Solomonoff induction [21], but we are motivated by a desire to address the issue empirically.

  10. One rule actually proposes both outputs as acceptable.

  11. As it turns out, this mostly applies to group 3 rules. There is only one case in which a 1A rule suppresses a group 2 rule, namely the \(AAAAA/B-B-\) situation.
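The situation-sampling procedure described in note 8 can be made concrete with a short sketch. Everything below is an illustrative assumption rather than the paper's own code: we assume a history is encoded as a list of (input, output) pairs, that there are exactly two possible outputs, and that “relabelings” refers to renaming the input symbols; the alphabet and history length in the final illustration are likewise hypothetical.

    from itertools import product

    def canonical(history):
        """Canonical form under relabeling: rename inputs by order of first appearance."""
        rename = {}
        canon = []
        for inp, out in history:
            if inp not in rename:
                rename[inp] = len(rename)
            canon.append((rename[inp], out))
        return tuple(canon)

    def has_mixed_evidence(history):
        """Condition (i): at least one input has been followed by both possible outputs."""
        seen = {}
        for inp, out in history:
            seen.setdefault(inp, set()).add(out)
        return any(len(outs) == 2 for outs in seen.values())

    def qualifies(history):
        """Both conditions from note 8: mixed evidence, and the last two inputs equal."""
        return (len(history) >= 2
                and history[-1][0] == history[-2][0]
                and has_mixed_evidence(history))

    # Illustration only: distinct qualifying histories of length 4 over a hypothetical
    # alphabet of two inputs ("A", "C") and two outputs ("B", "-").
    all_histories = product(product("AC", "B-"), repeat=4)
    sample = {canonical(h) for h in all_histories if qualifies(h)}
    print(len(sample))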

References

  1. Angluin, D.: Learning regular sets from queries and counterexamples. Inform. Comput. 75(2), 87–106 (1987)

  2. Bergman, R.: Learning World Models in Environments with Manifest Causal Structure. Ph.D. thesis, Massachusetts Institute of Technology (1995)

  3. Chrisman, L.: Reinforcement learning with perceptual aliasing: the perceptual distinctions approach. In: Proceedings of the Tenth National Conference on Artificial Intelligence, pp. 183–188. AAAI Press (1992)

  4. Drescher, G.L.: Made-Up Minds: A Constructivist Approach to Artificial Intelligence. MIT Press, Cambridge (1991)

  5. Ericsson, K.A., Simon, H.A.: Protocol Analysis: Verbal Reports as Data. MIT Press, Cambridge (1993)

  6. Gold, E.M.: Complexity of automaton identification from given data. Inform. Control 37(3), 302–320 (1978)

  7. Goodman, N.D., Tenenbaum, J.B., Feldman, J., Griffiths, T.L.: A rational analysis of rule-based concept learning. Cogn. Sci. 32(1), 108–154 (2008)

  8. Hofstadter, D.: Fluid Concepts & Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. Basic Books, New York (1995)

  9. Jackendoff, R.: Language, Consciousness, Culture: Essays on Mental Structure. MIT Press, Cambridge (2007)

  10. Kotovsky, K., Simon, H.A.: Empirical tests of a theory of human acquisition of concepts for sequential patterns. Cogn. Psychol. 4(3), 399–424 (1973)

  11. Mahabal, A.A.: Seqsee: A Concept-centered Architecture for Sequence Perception. Ph.D. thesis, Indiana University Bloomington (2009)

  12. McCallum, A.K.: Reinforcement Learning with Selective Perception and Hidden State. Ph.D. thesis, University of Rochester (1996)

  13. McCallum, R.A.: Instance-based utile distinctions for reinforcement learning with hidden state. In: ICML, pp. 387–395 (1995)

  14. Mealy, G.H.: A method for synthesizing sequential circuits. Bell Syst. Tech. J. 34(5), 1045–1079 (1955)

  15. Meredith, M.J.E.: Seek-Whence: A Model of Pattern Perception. Ph.D. thesis, Indiana University (1986)

  16. Newell, A., Simon, H.A.: Human Problem Solving. Prentice-Hall, Englewood Cliffs (1972)

  17. Piaget, J.: The Origins of Intelligence in Children. International Universities Press, New York (1952)

  18. Ron, D., Singer, Y., Tishby, N.: The power of amnesia: learning probabilistic automata with variable memory length. Mach. Learn. 25(2–3), 117–149 (1996)

  19. Shahbaz, M., Groz, R.: Inferring Mealy machines. In: Cavalcanti, A., Dams, D.R. (eds.) FM 2009. LNCS, vol. 5850, pp. 207–222. Springer, Heidelberg (2009)

  20. Simon, H.A., Kotovsky, K.: Human acquisition of concepts for sequential patterns. Psychol. Rev. 70(6), 534 (1963)

  21. Solomonoff, R.J.: A formal theory of inductive inference. Part I. Inform. Control 7(1), 1–22 (1964)

Acknowledgments

The author wishes to thank the anonymous reviewers and An-Dinh Nguyen for helpful comments.

Author information

Correspondence to Sean Markan.



Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Markan, S. (2016). System Induction Games and Cognitive Modeling as an AGI Methodology. In: Steunebrink, B., Wang, P., Goertzel, B. (eds.) Artificial General Intelligence. AGI 2016. Lecture Notes in Computer Science (LNAI), vol. 9782. Springer, Cham. https://doi.org/10.1007/978-3-319-41649-6_22

  • DOI: https://doi.org/10.1007/978-3-319-41649-6_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-41648-9

  • Online ISBN: 978-3-319-41649-6
