ABSTRACT
The use of videoconferencing in the workplace has been steadily growing. While multitasking during videoconferencing is often necessary, it is also viewed as impolite and sometimes unacceptable. One likely contributor to negative attitudes toward such multitasking is the disrupted sense of eye contact that occurs when an individual shifts their gaze to another screen, for example in a dual-monitor setup common in office settings. We present an approach to improving the sense of eye contact over videoconferencing in dual-monitor setups. Our approach uses computer vision and desktop activity detection to dynamically choose the camera with the best view of the user's face. We describe two alternative implementations of our solution (RGB-only, and a combination of RGB and RGB-D cameras). We then report results from an online experiment showing that our approach can significantly improve perceptions of a person's politeness and engagement in the meeting.
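The camera-selection logic described above can be sketched as a simple scoring rule: prefer the camera that currently sees the most frontal view of the face, with desktop activity on a monitor serving as a prior for where the user is looking. This is a minimal illustration with hypothetical names and a made-up bias heuristic, not the paper's actual implementation (which uses full computer-vision pipelines over RGB and RGB-D streams):

```python
def choose_camera(face_scores, active_monitor, bias=0.2):
    """Pick the index of the camera with the best view of the user's face.

    face_scores    -- per-camera frontal-face confidence in [0, 1],
                      e.g. from a face detector / head-pose estimator
    active_monitor -- index of the monitor with current desktop activity;
                      used here as a tie-breaking prior (hypothetical heuristic)
    bias           -- bonus added to the camera on the active monitor
    """
    best_cam, best_score = 0, float("-inf")
    for cam, score in enumerate(face_scores):
        # Desktop activity suggests where the user is looking, so nudge the
        # score of the camera mounted on the active monitor upward.
        adjusted = score + (bias if cam == active_monitor else 0.0)
        if adjusted > best_score:
            best_cam, best_score = cam, adjusted
    return best_cam
```

In a live system this rule would be re-evaluated every few frames, with hysteresis added so the selected camera does not flicker when the two scores are close.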
Index Terms
- Supporting Multitasking in Video Conferencing using Gaze Tracking and On-Screen Activity Detection