
Virtual Reality Based Immersive Telepresence System for Remote Conversation and Collaboration

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 10582)

Abstract

We developed a Virtual Reality (VR) based telepresence system that provides a novel immersive experience for remote conversation and collaboration. By wearing VR headsets, all participants are gathered in the same virtual space, with 3D cartoon avatars representing them. The avatars realistically emulate the head postures, facial expressions, and hand motions of the participants, enabling enjoyable group-to-group conversations among people who are spatially separated. Moreover, our VR telepresence system offers distinctly new modes of remote collaboration: for example, users can present slides or watch videos together, or cooperate on solving a math problem by working it out on a shared virtual blackboard, none of which can easily be achieved with a conventional video-based telepresence system. Experiments show that our system provides an unprecedented immersive experience for tele-conversation and opens up new possibilities for remote collaboration.
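The abstract describes the system only at this level of detail; no implementation is given here. As a rough, hypothetical illustration of the kind of per-frame avatar state such a system has to exchange so that remote peers can reproduce a participant's head pose, expression, and hand motion, the following Python sketch defines a compact message and its serialization. All names, field choices, and the 16-weight expression vector are our assumptions, not the authors' design.

```python
# Illustrative sketch (not from the paper): a per-frame avatar state message that a
# VR telepresence client might broadcast so remote peers can drive a cartoon avatar.
# Field names and the fixed binary layout are assumptions made for illustration.
import struct
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class AvatarState:
    user_id: int
    head_rotation: Tuple[float, float, float, float]  # HMD orientation quaternion (w, x, y, z)
    expression_weights: List[float] = field(default_factory=lambda: [0.0] * 16)  # blendshape weights in [0, 1]
    left_hand_pos: Tuple[float, float, float] = (0.0, 0.0, 0.0)   # metres, in the shared room frame
    right_hand_pos: Tuple[float, float, float] = (0.0, 0.0, 0.0)

    _FORMAT = "<I4f16f3f3f"  # little-endian: id, quaternion, 16 weights, two hand positions

    def pack(self) -> bytes:
        """Serialize to a fixed-size payload suitable for per-frame streaming."""
        return struct.pack(
            self._FORMAT,
            self.user_id,
            *self.head_rotation,
            *self.expression_weights,
            *self.left_hand_pos,
            *self.right_hand_pos,
        )

    @classmethod
    def unpack(cls, payload: bytes) -> "AvatarState":
        values = struct.unpack(cls._FORMAT, payload)
        return cls(
            user_id=values[0],
            head_rotation=tuple(values[1:5]),
            expression_weights=list(values[5:21]),
            left_hand_pos=tuple(values[21:24]),
            right_hand_pos=tuple(values[24:27]),
        )


if __name__ == "__main__":
    state = AvatarState(user_id=7, head_rotation=(1.0, 0.0, 0.0, 0.0))
    assert AvatarState.unpack(state.pack()).user_id == 7
    print(f"per-frame payload: {len(state.pack())} bytes")
```

On a receiving client, a payload like this would typically drive the avatar's head orientation directly, map the expression weights onto facial blendshapes, and feed the hand positions to an inverse-kinematics solver for the arms.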



Acknowledgements

This work was supported by a Research Grant of the Beijing Higher Institution Engineering Research Center and by the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme (MC-IRSES, grant No. 612627).

Author information


Corresponding author

Correspondence to Kun Xu.



Copyright information

© 2017 Springer International Publishing AG

About this paper


Cite this paper

Tan, Z., Hu, Y., Xu, K. (2017). Virtual Reality Based Immersive Telepresence System for Remote Conversation and Collaboration. In: Chang, J., Zhang, J., Magnenat Thalmann, N., Hu, S.M., Tong, R., Wang, W. (eds) Next Generation Computer Animation Techniques. AniNex 2017. Lecture Notes in Computer Science, vol 10582. Springer, Cham. https://doi.org/10.1007/978-3-319-69487-0_17


  • DOI: https://doi.org/10.1007/978-3-319-69487-0_17


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-69486-3

  • Online ISBN: 978-3-319-69487-0

  • eBook Packages: Computer Science, Computer Science (R0)
