Abstract:
This paper presents a Chinese interactive virtual character based on multi-modal mapping and rules, which receives information from input modules and generates audiovisual speech, facial expressions, and body animations. The audiovisual speech is synthesized from the input text by multi-modal mapping, while the facial expressions and body movements are driven by rules conditioned on emotional states. All of the original animations were captured with a motion capture system and mapped onto a character model created in 3D modeling software. We use an open-source skeletal animation engine to build the scene, in which the virtual character talks with users in a human-like manner. The overall expression of the virtual character is judged to be natural and realistic.
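To make the rule-based branch of the pipeline concrete, the sketch below shows one plausible way an emotion state could select pre-captured facial-expression and body-animation clips while the speech channel is synthesized separately. This is a minimal illustration under assumed names (the emotion labels, clip identifiers, and drive_character function are all hypothetical); the paper's actual rule set is not given in the abstract.

```python
from dataclasses import dataclass

@dataclass
class AnimationCue:
    face_expression: str   # facial-expression clip from the mocap library (hypothetical name)
    body_animation: str    # body-movement clip from the mocap library (hypothetical name)

# Rule table: each emotion state selects pre-captured animation clips.
EMOTION_RULES = {
    "neutral": AnimationCue("face_neutral", "body_idle"),
    "happy":   AnimationCue("face_smile",   "body_open_gesture"),
    "sad":     AnimationCue("face_frown",   "body_slumped"),
}

def drive_character(text: str, emotion: str) -> dict:
    """Combine synthesized speech (the multi-modal mapping step, stubbed here)
    with rule-selected face and body animations."""
    cue = EMOTION_RULES.get(emotion, EMOTION_RULES["neutral"])
    return {
        "audio_visual_speech": f"<synthesized from: {text!r}>",  # stand-in for the mapping step
        "face": cue.face_expression,
        "body": cue.body_animation,
    }

if __name__ == "__main__":
    print(drive_character("你好，很高兴见到你。", "happy"))
```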
Date of Conference: 18-21 September 2011
Date Added to IEEE Xplore: 31 October 2011