
Reactive virtual agent learning for NUI-based HRI applications

Published in Multimedia Tools and Applications

Abstract

The natural user interface (NUI) has been investigated in a variety of application software fields. This paper proposes an approach to generating virtual agents that can support users of NUI-based applications through human–robot interaction (HRI) learning in a virtual environment. Conventional HRI learning is carried out by repeating processes that are time-consuming, complicated, and dangerous because of certain features of physical robots. A method is therefore needed to train virtual agents that interact with virtual humans imitating human movements in a virtual environment. Once the interaction learning is completed, the resulting virtual agent can be applied to NUI-based interactive applications. The proposed method was applied to a model of a typical house in a virtual environment, with a virtual human performing daily-life activities such as washing, eating, and watching TV. The results show that the virtual agent can predict a human's intent, identify actions that are helpful to the human, and provide services 16% faster than a virtual agent trained using traditional Q-learning.
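The tabular Q-learning used as the baseline above can be illustrated with a minimal sketch. The corridor environment, reward scheme, and hyperparameters below are illustrative assumptions, not the paper's actual experimental setup: an agent learns by trial and error to move toward a goal cell, standing in for the location of the human it should assist.

```python
import random

# Minimal tabular Q-learning on a toy 1-D corridor (illustrative only).
# The agent starts in cell 0 and must reach the goal cell N_STATES - 1.

N_STATES = 6          # corridor cells 0..5; cell 5 is the goal
ACTIONS = (-1, +1)    # step left / step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            if rng.random() < EPS:                      # explore
                a = rng.choice(ACTIONS)
            else:                                       # exploit, random tie-break
                a = max(ACTIONS, key=lambda b: (Q[(s, b)], rng.random()))
            s2 = min(max(s + a, 0), N_STATES - 1)       # walls at both ends
            r = 1.0 if s2 == N_STATES - 1 else 0.0      # reward only at the goal
            best_next = max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2
    return Q

Q = train()
# Greedy policy after training: move right (+1) from every non-goal cell.
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES - 1)}
```

The random tie-break in the greedy step matters: before any reward has propagated, all Q-values are zero, and a deterministic tie-break would keep the agent stuck at one wall instead of exploring.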



Acknowledgments

This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (NIPA-2014-H0301-14-1021) supervised by the NIPA (National IT Industry Promotion Agency). This work was also supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2011-0011266). This paper extends and improves upon a paper accepted at the KCIC-2013/FCC-2014 conferences.

Author information


Corresponding author

Correspondence to Kyungeun Cho.


Cite this article

Jin, D., Cho, S., Sung, Y. et al. Reactive virtual agent learning for NUI-based HRI applications. Multimed Tools Appl 75, 15157–15170 (2016). https://doi.org/10.1007/s11042-014-2048-5

