Abstract
Teaching robots new skills with minimal time and effort has long been a goal of artificial intelligence. This paper investigates the use of game-theoretic representations to model interactive games and to learn their win conditions by interacting with a person. Game theory provides the formal underpinnings needed to represent the structure of a game, including its goal conditions. Learning from demonstration, in turn, has long sought to leverage a robot's interactions with a person to foster learning. This paper combines the two approaches, allowing a robot to learn a game-theoretic representation by demonstration. We show how a robot can be taught the win conditions of the game Connect Four using a single demonstration and a few trial examples, with a question-and-answer session led by the robot. Our results demonstrate that the robot can learn any win condition for the standard rules of Connect Four after a human demonstration, irrespective of the color or size of the board and chips. Moreover, if the human demonstrates a variation of the win conditions, the robot learns the corresponding changed win condition.
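To make the notion of a parameterizable win condition concrete, the sketch below checks whether a player has a run of aligned pieces on a Connect Four board. This is only an illustrative stand-in, not the paper's learned game-theoretic representation: the function name `check_win`, the list-of-lists board encoding, and the `n_in_row` parameter (which mirrors the idea that a variant win condition, e.g. three in a row, can replace the standard four) are all assumptions introduced here.

```python
def check_win(board, player, n_in_row=4):
    """Return True if `player` has `n_in_row` aligned pieces on `board`.

    Hypothetical illustration only: `board` is a rows-by-columns list of
    lists, with empty cells holding None. The paper's actual learned
    representation is game-theoretic and is not reproduced here.
    """
    rows, cols = len(board), len(board[0])
    # Four line directions: right, down, down-right, down-left.
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for r in range(rows):
        for c in range(cols):
            if board[r][c] != player:
                continue
            for dr, dc in directions:
                cells = [(r + i * dr, c + i * dc) for i in range(n_in_row)]
                if all(
                    0 <= rr < rows and 0 <= cc < cols and board[rr][cc] == player
                    for rr, cc in cells
                ):
                    return True
    return False
```

Because the run length is a parameter rather than a constant, the same check covers both the standard rules and demonstrated variations, which is the flexibility the abstract claims for the learned win conditions.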
Acknowledgement
Support for this research was provided by Penn State’s Teaching and Learning with Technology (TLT) Fellowship.
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Ayub, A., Wagner, A.R. (2018). Learning to Win Games in a Few Examples: Using Game-Theory and Demonstrations to Learn the Win Conditions of a Connect Four Game. In: Ge, S., et al. Social Robotics. ICSR 2018. Lecture Notes in Computer Science(), vol 11357. Springer, Cham. https://doi.org/10.1007/978-3-030-05204-1_34
Print ISBN: 978-3-030-05203-4
Online ISBN: 978-3-030-05204-1