ABSTRACT
Human-device interaction in smart environments is shifting towards naturalistic modalities such as gaze and gesture. However, ambiguities arise when users must switch interactions as contexts change, which can confuse users accustomed to a set of conventional controls and lead to system inefficiencies. My research explores how to reduce interaction ambiguity by semantically modelling user-specific interactions together with their context, enabling personalised interactions through augmented reality (AR). Sensory data captured by an AR device is used to interpret user interactions and context, which are then modelled in an extendable knowledge graph, along with the user's interaction preferences, using semantic web standards. These representations give AR applications semantic knowledge of the user's intent to interact with a particular device affordance. This research therefore aims to bring semantic modelling of personalised gesture interactions to AR/VR applications for smart and immersive environments.
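The knowledge-graph representation described above can be sketched with plain subject-predicate-object triples, the basic data model of semantic web standards such as RDF. The following is a minimal, stdlib-only illustration, not the author's implementation: the `EX` namespace, all predicate names (`prefersGesture`, `targetsAffordance`), and the `query` helper are hypothetical, chosen only to show how an AR application could resolve a user's intended device affordance from a stored gesture preference.

```python
# Illustrative sketch: a user's gesture preference for a device affordance
# represented as (subject, predicate, object) triples in a tiny in-memory
# knowledge graph. All names and the namespace are assumptions.

EX = "http://example.org/smart-env#"  # hypothetical namespace

def uri(local):
    """Expand a local name into a full URI in the example namespace."""
    return EX + local

# An extendable set of triples describing one user's context-specific preference.
graph = {
    (uri("user1"), uri("prefersGesture"), uri("pinchGesture")),
    (uri("pinchGesture"), uri("targetsAffordance"), uri("lampBrightness")),
    (uri("lampBrightness"), uri("affordanceOf"), uri("livingRoomLamp")),
    (uri("user1"), uri("inContext"), uri("livingRoom")),
}

def query(triples, s=None, p=None, o=None):
    """Return triples matching a simple (s, p, o) pattern; None is a wildcard."""
    return [
        (ts, tp, to) for (ts, tp, to) in triples
        if (s is None or ts == s) and (p is None or tp == p) and (o is None or to == o)
    ]

# Resolve intent: follow the user's preference edge, then the affordance it targets.
preferred = query(graph, s=uri("user1"), p=uri("prefersGesture"))[0][2]
target = query(graph, s=preferred, p=uri("targetsAffordance"))[0][2]
print(preferred.split("#")[1], "->", target.split("#")[1])  # prints: pinchGesture -> lampBrightness
```

In a full system these triples would live in an RDF store and be queried with SPARQL; the pattern-matching `query` function here stands in for that to keep the sketch self-contained.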
Index Terms
- Personalised Human-Device Interaction through Context-aware Augmented Reality