ABSTRACT
We discuss engineering aspects of shifting from “do you see what I see?” applications, which stream the user’s field of view to remote viewers, toward “do you control what I see?” features, in which remote viewers are given the means to control the primary user’s field of view. To this end, we present two applications: (1) for smartglasses with an embedded video camera that live-stream the wearer’s view, and (2) for the HoloLens HMD, which presents users with mediated versions of the visual world controlled by remote viewers.
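As a rough illustration of the streaming side of such applications, camera frames are often serialized as base64-encoded JPEG payloads inside JSON text messages and pushed over a text-based channel such as a WebSocket. The sketch below shows only the message packing and unpacking; the function names and message fields are our own illustration, not the paper’s implementation:

```python
import base64
import json


def encode_frame(jpeg_bytes: bytes, seq: int) -> str:
    """Package one camera frame as a JSON text message.

    Base64-encoding the JPEG payload lets binary image data travel
    over a text-only channel, at the cost of ~33% size overhead.
    The sequence number lets the receiver detect dropped frames.
    """
    return json.dumps({
        "seq": seq,
        "frame": base64.b64encode(jpeg_bytes).decode("ascii"),
    })


def decode_frame(message: str) -> tuple[int, bytes]:
    """Reverse of encode_frame: recover the sequence number and JPEG bytes."""
    msg = json.loads(message)
    return msg["seq"], base64.b64decode(msg["frame"])
```

In a real deployment the text overhead of base64 matters at video frame rates, which is why production systems typically prefer binary WebSocket frames or dedicated streaming protocols (e.g., HLS or MPEG-DASH) with their respective latency trade-offs.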