Abstract
In this paper, we propose a new approach to generating interactive behaviours for virtual characters in real time, based on the windowed Viterbi algorithm, and compare the performance of the standard and windowed Viterbi algorithms within our system. The system tracks and analyses the behaviour of a real person in video input and produces a fully articulated three-dimensional (3D) character interacting with that person. Our approach is model-based: prior to tracking, we train a collection of dual-input Hidden Markov Models (HMMs) on 3D motion capture (MoCap) data representing a number of interactions between two people. Using the dual-input HMMs, we then generate a moving virtual character that reacts to the motion of the real person. We present a detailed evaluation of the windowed Viterbi algorithm within our system and show that our approach is suitable for generating interactive behaviours in real time. Furthermore, to enhance the tracking capabilities of the algorithm, we develop a novel technique that automatically splits complex motion data, resulting in improved tracking of human motion by our model.
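To make the real-time aspect concrete, the sketch below illustrates one way a windowed Viterbi decoding step could be implemented for a discrete-state HMM: instead of decoding the entire observation sequence, only a sliding window of the most recent frames is decoded at each step, so a state (and hence a pose) can be committed without waiting for the whole sequence. This is a minimal illustration under our own assumptions, not the paper's implementation; the function name windowed_viterbi_step, the window parameter, and the use of precomputed per-frame observation log-likelihoods are illustrative choices.

import numpy as np

def windowed_viterbi_step(log_A, log_pi, log_likelihoods, window=30):
    """Decode the most likely state path over a sliding window of recent
    observation log-likelihoods (standard Viterbi recursion, restricted
    to the last `window` frames).

    log_A           : (N, N) log state-transition matrix
    log_pi          : (N,)   log initial state distribution
    log_likelihoods : (T, N) per-frame log observation likelihoods
    """
    obs = log_likelihoods[-window:]            # keep only the current window
    T, N = obs.shape

    delta = log_pi + obs[0]                    # best log-score ending in each state
    psi = np.zeros((T, N), dtype=int)          # back-pointers for path recovery

    for t in range(1, T):
        scores = delta[:, None] + log_A        # scores[i, j]: best path into j via i
        psi[t] = np.argmax(scores, axis=0)
        delta = scores[psi[t], np.arange(N)] + obs[t]

    # Backtrack within the window to recover the locally optimal path.
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta)
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

In a set-up such as the one described in the abstract, each new video frame would contribute a row of observation log-likelihoods (e.g. scoring the tracked person's pose against the HMM's output model), and the final state of the returned path would index the MoCap-derived pose used to animate the virtual character for that frame.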
Cite this paper
Zheng, Y., Hicks, Y., Marshall, D., Cosker, D. (2009). Real-Time Generation of Interactive Virtual Human Behaviours. In: Ranchordas, A., Araújo, H.J., Pereira, J.M., Braz, J. (eds) Computer Vision and Computer Graphics. Theory and Applications. VISIGRAPP 2008. Communications in Computer and Information Science, vol 24. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-10226-4_6