
A Multi-User 3-D Virtual Environment with Interactive Collaboration and Shared Whiteboard Technologies

Published in: Multimedia Tools and Applications
Abstract

A multi-user 3-D virtual environment allows remote participants to communicate transparently, as if they were meeting face-to-face. A sense of presence in such an environment can be established by representing each participant with a vivid, human-like character called an avatar. We review several immersive technologies, including directional sound, eye gaze, hand gestures, lip synchronization, and facial expressions, that facilitate multimodal interaction among participants in the virtual environment using speech processing and animation techniques. Interactive collaboration is further encouraged by the ability to share and manipulate 3-D objects in the virtual environment. A shared whiteboard makes it easy for participants to convey their ideas graphically. We survey various kinds of capture devices used to provide input for the shared whiteboard. Efficient storage of whiteboard sessions and their precise retrieval at a later time raise interesting research topics in information retrieval.




Cite this article

Leung, W.H., Chen, T. A Multi-User 3-D Virtual Environment with Interactive Collaboration and Shared Whiteboard Technologies. Multimedia Tools and Applications 20, 7–23 (2003). https://doi.org/10.1023/A:1023466231968
