
Knowledge-Based Robotic Agent as a Game Player

  • Conference paper
PRICAI 2019: Trends in Artificial Intelligence (PRICAI 2019)

Abstract

We investigate how a Robot's ability to communicate through explanations can cultivate better trust relationships between the Robot and its human teammates. We chose a partial-information game-playing environment to immerse humans in interaction with a Robotic Agent. We designed our Robotic Agent as a Knowledge-Based (KB) Robotic Agent that does not play perfectly, but plays with significant expertise and approximates well enough by continually updating its beliefs in a partially observable environment. On top of the game, we developed an explanation-generation mechanism that produces meaningful explanations of the game strategy at a level the human teammates appreciate and understand. The generated explanations adapt to the game situation, which can increase the humans' overall understanding of the task domain. We evaluated the individual effectiveness of our KB Robotic Agent through a case study with the partial-information game Domino. In a computational experiment, our KB Robotic Agent played 10,000 game matches against other agents and exhibited a reasonable winning rate. From this victory proportion, we conclude that our KB Robotic Agent captures and analyses all available information intelligently and forecasts the opponents' possible moves correctly.
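
The belief-update idea described in the abstract can be illustrated with a short, hedged sketch. The Python below is not the paper's implementation; the names (BeliefState, observe_play, observe_pass) are hypothetical. It only shows the general pattern of a knowledge-based player maintaining probabilities over the unseen Domino tiles and revising them as moves and passes are observed.

    from itertools import combinations_with_replacement

    def all_tiles():
        # Full double-six Domino set: 28 tiles represented as (low, high) pairs.
        return [tuple(sorted(t)) for t in combinations_with_replacement(range(7), 2)]

    class BeliefState:
        def __init__(self, my_hand, opponents):
            unseen = [t for t in all_tiles() if t not in my_hand]
            # Uniform prior: each unseen tile is equally likely to be held by any
            # opponent (the boneyard is ignored here for brevity).
            self.belief = {opp: {t: 1.0 / len(opponents) for t in unseen}
                           for opp in opponents}

        def observe_play(self, opponent, tile):
            # A tile that has been played is certainly no longer in any HAND.
            for opp in self.belief:
                self.belief[opp].pop(tuple(sorted(tile)), None)

        def observe_pass(self, opponent, open_ends):
            # A pass is evidence that the opponent holds no tile matching the open
            # ends, so the belief in those tiles for that opponent drops to zero.
            for tile in list(self.belief[opponent]):
                if tile[0] in open_ends or tile[1] in open_ends:
                    self.belief[opponent][tile] = 0.0

A fuller agent would also renormalise these probabilities against the known size of each opponent's HAND before choosing which tile to play.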


Notes

  1. Here, task means the game.

  2. NAOqi is the main programming framework that runs on the robot and controls it.

  3. A HAND is the set of seven tiles held by a player.

  4. In some literature this is called the sensor model: the probability of observing E given that H is true.

  5. "Sure" denotes the belief computed after seeing the evidence; this is the posterior probability, because it reflects the level of belief computed in the light of the new evidence (see the Bayes-rule sketch after these notes).

  6. By keeping track of the moves played by each player, including passes, and the number of tiles in the other players' HANDS.

  7. Because it is the probability of the hypothesis given the observed evidence.

  8. When \(Prob(E \mid H)\) is treated as a likelihood, it is viewed as a function of H with E fixed; it indicates the compatibility of the evidence with the given hypothesis.

  9. The open ends on the board, the sizes of all other players' HANDS, the moves played by every player (including passes), and the current belief space.

  10. Choose an adequate tile to play.

  11. That is, starting from the goal state, our KB Robotic Agent tracks back to find the game state in which the decision was made. To do so, the KB Robotic Agent leaves a reasoning trace behind a goal tree, which makes it possible to answer questions about its behaviour (the decisions it made).

  12. We did not include strategies for playing the game as part of the Static Explanations, because we want the human teammates to learn different strategies by observing the KB Robotic Agent's way of playing (its decisions).

  13. For example, in the game Domino, the pass information of a player (which player passes its turn, and on which tile number).

  14. The complete set of 28 Domino tiles, tracked in terms of the number of tiles left in each player's HAND.
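
Notes 4, 5, 7 and 8 together describe a standard Bayes-rule update: a prior belief Prob(H), a sensor model or likelihood Prob(E | H), and the posterior Prob(H | E) computed after seeing the evidence. The short sketch below only illustrates that relationship; the numbers are invented for illustration and do not come from the paper.

    def posterior(prior_h, lik_e_given_h, lik_e_given_not_h):
        # Bayes' rule: P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | ~H) P(~H)].
        evidence = lik_e_given_h * prior_h + lik_e_given_not_h * (1.0 - prior_h)
        return lik_e_given_h * prior_h / evidence

    # H: some hypothesis about an opponent's HAND; E: an observed move.
    # The probabilities below are illustrative only.
    print(posterior(prior_h=0.5, lik_e_given_h=0.95, lik_e_given_not_h=0.2))
    # -> about 0.83: the belief in H ("Sure", in the notes' terms) rises once the
    #    evidence is taken into account.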



Author information


Corresponding author

Correspondence to Misbah Javaid.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Javaid, M., Estivill-Castro, V., Hexel, R. (2019). Knowledge-Based Robotic Agent as a Game Player. In: Nayak, A., Sharma, A. (eds) PRICAI 2019: Trends in Artificial Intelligence. PRICAI 2019. Lecture Notes in Computer Science, vol 11672. Springer, Cham. https://doi.org/10.1007/978-3-030-29894-4_27


  • DOI: https://doi.org/10.1007/978-3-030-29894-4_27


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-29893-7

  • Online ISBN: 978-3-030-29894-4

  • eBook Packages: Computer Science, Computer Science (R0)
