Learning to Win Games in a Few Examples: Using Game-Theory and Demonstrations to Learn the Win Conditions of a Connect Four Game

  • Conference paper
  • Published in: Social Robotics (ICSR 2018)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11357)

Included in the following conference series: International Conference on Social Robotics (ICSR)

Abstract

Teaching robots new skills with minimal time and effort has long been a goal of artificial intelligence. This paper investigates the use of game-theoretic representations to model interactive games and to learn their win conditions by interacting with a person. Game theory provides the formal underpinnings needed to represent the structure of a game, including its goal conditions. Learning by demonstration has long sought to leverage a robot's interactions with a person to foster learning. We combine these two approaches, allowing a robot to learn a game-theoretic representation by demonstration. We show how a robot can be taught the win conditions of the game Connect Four using a single demonstration and a few trial examples, with a question-and-answer session led by the robot. Our results demonstrate that, after a human demonstration, the robot can learn any win condition of the standard Connect Four rules, irrespective of the color or size of the board and chips. Moreover, if the human demonstrates a variation of the win conditions, the robot learns the modified win condition accordingly.
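The abstract describes the learned win conditions only at a high level. As a purely illustrative sketch, the Python below shows one simple way a win condition of the kind described (k chips of one color in a line, independent of board size and chip color) could be checked against a board state. The pattern-based encoding, the function name satisfies_win_condition, and its parameters are assumptions made here for clarity; the paper's own representation is game-theoretic and is learned from the demonstration and the robot-led question-and-answer session.

```python
# Illustrative sketch only: the paper encodes win conditions within a
# game-theoretic representation learned from demonstration and Q&A; the
# pattern-based encoding below is a simplifying assumption for clarity.
from typing import List, Optional

# Assumed encoding of a learned win condition: "k chips of one color in a
# line", where the teacher's demonstration may restrict the allowed line
# directions (e.g., only vertical wins in a rule variant).
DIRECTIONS = {
    "horizontal": (0, 1),
    "vertical": (1, 0),
    "diagonal_up": (1, 1),
    "diagonal_down": (1, -1),
}

def satisfies_win_condition(
    board: List[List[Optional[str]]],
    player: str,
    k: int = 4,
    directions: List[str] = list(DIRECTIONS),
) -> bool:
    """Return True if `player` has k chips in a line along an allowed direction.

    The board is a rows x cols grid of chip colors (None for empty cells),
    so the check is independent of board size and chip color, mirroring the
    generalization described in the abstract.
    """
    rows, cols = len(board), len(board[0])
    for r in range(rows):
        for c in range(cols):
            if board[r][c] != player:
                continue
            for name in directions:
                dr, dc = DIRECTIONS[name]
                cells = [(r + i * dr, c + i * dc) for i in range(k)]
                if all(
                    0 <= rr < rows and 0 <= cc < cols and board[rr][cc] == player
                    for rr, cc in cells
                ):
                    return True
    return False
```

Under this reading, a rule variant taught by the human (for example, five in a row, or vertical wins only) would amount to changing k or restricting the allowed directions, rather than re-learning the game from scratch.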


Acknowledgement

Support for this research was provided by Penn State’s Teaching and Learning with Technology (TLT) Fellowship.

Author information

Corresponding author

Correspondence to Ali Ayub.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Ayub, A., Wagner, A.R. (2018). Learning to Win Games in a Few Examples: Using Game-Theory and Demonstrations to Learn the Win Conditions of a Connect Four Game. In: Ge, S., et al. (eds.) Social Robotics. ICSR 2018. Lecture Notes in Computer Science (LNAI), vol. 11357. Springer, Cham. https://doi.org/10.1007/978-3-030-05204-1_34

  • DOI: https://doi.org/10.1007/978-3-030-05204-1_34

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-05203-4

  • Online ISBN: 978-3-030-05204-1

  • eBook Packages: Computer Science, Computer Science (R0)
