Abstract
"Dive into the Movie" (DIM) is a project that aims to realize an innovative entertainment system providing an immersive experience of a story: every member of the audience can participate in a movie as one of its cast and share the impression with family or friends while watching it. To realize this system, we model and capture each participant's personal characteristics (face, body, gait, hair, and voice) instantly and precisely. All modeling, character synthesis, rendering, and compositing processes must be performed in real time without any manual operation. In this paper, we introduce the Future Cast System (FCS), a novel entertainment system that serves as a prototype of DIM. The first experimental demonstration of FCS was held at the World Exposition 2005, where 1,630,000 people experienced the event over six months. Finally, we introduce the up-to-date DIM system, which realizes an even more realistic sensation.
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Morishima, S., Yagi, Y., Nakamura, S. (2011). Instant Movie Casting with Personality: Dive into the Movie System. In: Shumaker, R. (eds) Virtual and Mixed Reality - Systems and Applications. VMR 2011. Lecture Notes in Computer Science, vol 6774. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-22024-1_21
DOI: https://doi.org/10.1007/978-3-642-22024-1_21
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-22023-4
Online ISBN: 978-3-642-22024-1
eBook Packages: Computer Science (R0)