Application of Instruction-Based Behavior Explanation to a Reinforcement Learning Agent with Changing Policy

Conference paper: Neural Information Processing (ICONIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10634)

Abstract

Agents that acquire their own policies autonomously carry a risk of accidents caused by unexpected behavior, so improving the predictability of an agent's behavior is necessary for ensuring safety. Instruction-based Behavior Explanation (IBE) is a method by which a reinforcement learning agent announces its own future behavior. However, it had not been verified that IBE is applicable to an agent that changes its policy dynamically. In this paper, we consider agents under training and improve IBE so that it can be applied to agents whose policies are changing. We conducted an experiment to verify whether the behavior explanation model of an immature agent still works after the agent undergoes further training. The results indicate that the improved IBE is applicable to agents under training.
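
To make the abstract's central idea concrete, the sketch below is a minimal, hypothetical illustration of an agent announcing its future behavior while its policy is still changing: a tabular Q-learning agent on a toy corridor rolls its current greedy policy forward and reports the trajectory it expects to follow. This is not the authors' IBE algorithm, and every name in it (step, greedy, announce_plan, train) is an assumption made for illustration only.

```python
# Hypothetical sketch -- NOT the authors' IBE algorithm. It only illustrates
# the general idea of an agent "announcing" its future behavior while its
# policy is still changing: simulate the current greedy policy forward and
# report the trajectory the agent expects to follow.
import random

N_STATES = 8            # corridor cells 0..7; the goal is the rightmost cell
GOAL = N_STATES - 1
ACTIONS = (-1, +1)      # move left / move right

def step(state, action):
    """Toy environment dynamics: walk the corridor, reward 1.0 at the goal."""
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(q, state):
    """Greedy action under the current Q-table, breaking ties at random."""
    best = max(q[state])
    return random.choice([a for a in range(len(ACTIONS)) if q[state][a] == best])

def announce_plan(q, state, horizon=10):
    """Roll the *current* greedy policy forward and return the predicted
    trajectory -- the agent's announcement of its future behavior."""
    path, s = [state], state
    for _ in range(horizon):
        s, _, done = step(s, ACTIONS[greedy(q, s)])
        path.append(s)
        if done:
            break
    return path

def train(episodes, q=None, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning; passing in an existing Q-table continues training."""
    q = q if q is not None else [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(500):                       # cap episode length
            a = random.randrange(2) if random.random() < eps else greedy(q, s)
            nxt, r, done = step(s, ACTIONS[a])
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
            if done:
                break
    return q

random.seed(0)
q = train(episodes=3)                    # immature agent: unreliable announcement
print("early announcement:", announce_plan(q, 0))
q = train(episodes=200, q=q)             # further training changes the policy
print("later announcement:", announce_plan(q, 0))   # now marches to the goal
```

In this toy setting, the immature agent's announcement typically wanders or is cut off at the horizon, while the same call after further training reports a direct march to the goal; that gap mirrors the paper's question of whether an explanation model built for an immature agent remains useful as the policy improves.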

Author information

Corresponding author

Correspondence to Yosuke Fukuchi.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Fukuchi, Y., Osawa, M., Yamakawa, H., Imai, M. (2017). Application of Instruction-Based Behavior Explanation to a Reinforcement Learning Agent with Changing Policy. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol 10634. Springer, Cham. https://doi.org/10.1007/978-3-319-70087-8_11

  • DOI: https://doi.org/10.1007/978-3-319-70087-8_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70086-1

  • Online ISBN: 978-3-319-70087-8

  • eBook Packages: Computer Science; Computer Science (R0)
