ABSTRACT
Efficient computation of strategic movements is essential for controlling virtual avatars intelligently in computer games and 3D virtual environments. Such a module is needed to make non-player characters (NPCs) fight, play team sports, or move through a dense crowd. Reinforcement learning is one approach to real-time optimal control. However, the huge state space of human interactions makes it difficult to apply existing learning methods to control avatars that interact densely with other characters. In this research, we propose a new methodology to efficiently plan the movements of an avatar interacting with another. We exploit the fact that the subspace of meaningful interactions is much smaller than the full state space of two avatars. We collect samples efficiently by exploring the subspace where dense interactions between the avatars occur, favoring samples that are highly connected to the other samples. From the collected samples, we compose a finite state machine (FSM) called the Interaction Graph. At run-time, we compute the optimal action of each avatar by min-max search or dynamic programming on the Interaction Graph. The methodology is applicable to controlling NPCs in fighting and ball-sports games.
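The run-time planning described above can be sketched as a min-max search over a graph of precomputed interaction states: one character tries to maximize a reward while the opponent tries to minimize it. The node names, reward values, and graph below are hypothetical illustrations for the search procedure only, not the paper's actual data or API:

```python
# Hypothetical sketch of min-max search on an "Interaction Graph":
# nodes are sampled two-character interaction states, and the edges
# from a node are the interaction states reachable by the characters'
# actions. One character maximizes the reward; the other minimizes it.

def minmax(graph, reward, node, depth, maximizing=True):
    """Return the best reward reachable from `node` within `depth` moves."""
    successors = graph.get(node, [])
    if depth == 0 or not successors:
        return reward[node]  # leaf: evaluate the interaction state itself
    values = [minmax(graph, reward, s, depth - 1, not maximizing)
              for s in successors]
    return max(values) if maximizing else min(values)

# Toy graph: interaction state -> reachable interaction states.
graph = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": ["s3", "s4"]}
reward = {"s0": 0, "s1": 2, "s2": 1, "s3": -1, "s4": 5}

best = minmax(graph, reward, "s0", depth=2)  # -> -1 with this toy data
```

Because the Interaction Graph is a small FSM built offline from the sampled subspace, a shallow search like this (or a dynamic-programming sweep over all nodes) stays cheap enough for real-time control.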
Index Terms
Simulating interactions of avatars in high dimensional state space