Abstract
Audio-to-visual synchronization is important for multimedia applications involving talking humans, whether natural or synthetic. A close correlation exists between the acoustic speech signal and the visible lip movements, and it can be exploited to develop real-time audio-to-visual conversion. In this article, we apply ART2 together with a multi-audio-frame technique to derive a lip-movement sequence from the corresponding audio speech stream. The training process of ART2 is fast, and the network can learn new patterns without necessarily forgetting those learned in the past. For multi-user adaptation, we propose a system that uses one user's ART2 model as the reference model, combined with audio-adaptation and visual-learning mechanisms for adapting to new users. The audio adaptation maps a new user's audio features into the reference model's audio-feature space, and the visual learning lets the reference ART2 model learn the speech characteristics of the new user. Experimental results show that the proposed ART2-based method is both fast and effective in single-user and multi-user settings.
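The abstract gives no implementation details, so the following is a minimal, simplified ART-style sketch in Python/NumPy, not the full ART2 dynamics of Carpenter and Grossberg; all names (`SimplifiedART2`, `multi_frame_windows`, the `vigilance` and `lr` parameters) are illustrative assumptions. It shows the two ingredients the abstract names: multi-frame audio windows as network input, and vigilance-gated category learning that commits new categories instead of overwriting old ones, which is the "learn without forgetting" property.

```python
import numpy as np

class SimplifiedART2:
    """Vigilance-gated prototype learner (simplified ART-style sketch).

    Each multi-frame audio window is compared against stored audio
    prototypes. If the best match passes the vigilance test, that
    category "resonates" and is refined; otherwise a brand-new category
    is committed, so earlier categories are never overwritten.
    """

    def __init__(self, vigilance=0.9, lr=0.1):
        self.vigilance = vigilance  # cosine-similarity match threshold
        self.lr = lr                # prototype update rate
        self.prototypes = []        # one unit-norm audio prototype per category
        self.visual = []            # associated lip parameters per category

    @staticmethod
    def _unit(x):
        n = np.linalg.norm(x)
        return x / n if n > 0 else x

    def train(self, audio_window, lip_params):
        x = self._unit(np.asarray(audio_window, dtype=float))
        v = np.asarray(lip_params, dtype=float)
        if self.prototypes:
            sims = [float(np.dot(x, p)) for p in self.prototypes]
            j = int(np.argmax(sims))
            if sims[j] >= self.vigilance:
                # Resonance: refine only the winning category.
                self.prototypes[j] = self._unit(
                    (1 - self.lr) * self.prototypes[j] + self.lr * x)
                self.visual[j] = (1 - self.lr) * self.visual[j] + self.lr * v
                return j
        # Mismatch everywhere: commit a new category (stable learning).
        self.prototypes.append(x)
        self.visual.append(v.copy())
        return len(self.prototypes) - 1

    def convert(self, audio_window):
        """Audio-to-visual conversion: lip parameters of the best category."""
        x = self._unit(np.asarray(audio_window, dtype=float))
        j = int(np.argmax([np.dot(x, p) for p in self.prototypes]))
        return self.visual[j]

def multi_frame_windows(frames, n=5):
    """Stack n consecutive audio feature frames into one input vector so
    the network sees coarticulation context (the multi-audio-frame idea)."""
    return [np.concatenate(frames[i:i + n]) for i in range(len(frames) - n + 1)]
```

For the multi-user case, the abstract describes mapping a new user's audio features into the reference model's feature space before the reference ART2 model continues to learn. The form of that mapping is not specified here; a least-squares linear transform, fitted on time-aligned features of the same utterances, is one plausible illustration:

```python
def fit_audio_adaptation(new_user_feats, ref_user_feats):
    """Least-squares linear map W with new_user_feats @ W ~= ref_user_feats,
    estimated from time-aligned feature pairs (rows) of shared utterances.
    A linear map is an assumption; the paper may use a different mapping."""
    W, *_ = np.linalg.lstsq(new_user_feats, ref_user_feats, rcond=None)
    return W

# Adapted features (new_user_frame @ W) are then fed to the reference
# model, which keeps training ("visual learning") and simply commits
# new categories for the new user's genuinely new speech patterns.
```

Because an ART-style network adds categories under mismatch rather than rewriting existing prototypes, adapting to a new user in this sketch cannot degrade the reference user's already-learned audio-to-visual associations.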
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Lin, J.-J., Hsieh, C.-W., Jong, T.-L. (2003). Investigation of ART2-Based Audio-to-Visual Conversion for Multimedia Applications. In: Liu, J., Cheung, Y.-M., Yin, H. (eds) Intelligent Data Engineering and Automated Learning. IDEAL 2003. Lecture Notes in Computer Science, vol. 2690. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45080-1_152
DOI: https://doi.org/10.1007/978-3-540-45080-1_152
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-40550-4
Online ISBN: 978-3-540-45080-1