DOI: 10.1145/3379337.3415815
research-article

CAPturAR: An Augmented Reality Tool for Authoring Human-Involved Context-Aware Applications

Published: 20 October 2020

Abstract

Recognition of human behavior plays an important role in context-aware applications. However, it remains challenging for end-users to build personalized applications that accurately recognize their own activities. We therefore present CAPturAR, an in-situ programming tool that lets users rapidly author context-aware applications by referring to their previous activities. We customize an AR head-mounted device with a multi-camera system that non-intrusively captures the user's daily activities. During authoring, we reconstruct the captured data in AR with an animated avatar and represent the surrounding environment with virtual icons. With our visual programming interface, users create human-centered rules for the applications and experience them instantly in AR. We demonstrate four use cases enabled by CAPturAR, verify the effectiveness of the AR-HMD and the authoring workflow in a system evaluation with our prototype, and evaluate usability in a remote user study conducted in an AR simulator.
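The rule authoring the abstract describes follows the familiar trigger-action pattern, with a recognized human activity as the trigger and a device command as the action. Purely as an illustration of that pattern — not the authors' implementation, and with all names (`Rule`, `run_rules`, the activity and device strings) hypothetical — a minimal sketch:

```python
# Illustrative sketch of a human-centered trigger-action rule, of the kind
# the abstract describes. Not CAPturAR's API; all names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger_activity: str      # a recognized human activity, e.g. "open_fridge"
    action: Callable[[], str]  # device command issued when the rule fires

def run_rules(detected_activity: str, rules: list[Rule]) -> list[str]:
    """Fire every rule whose trigger matches the detected activity."""
    return [r.action() for r in rules if r.trigger_activity == detected_activity]

rules = [
    Rule("open_fridge", lambda: "kitchen_light:on"),
    Rule("sit_on_sofa", lambda: "tv:on"),
]
print(run_rules("open_fridge", rules))  # ['kitchen_light:on']
```

In CAPturAR the triggers come from the system's activity recognition over captured daily-activity data, and the rules are composed visually in AR rather than in code.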

Supplementary Material

  • VTT File (3379337.3415815.vtt)
  • MP4 File (ufp1798pv.mp4): Preview video
  • MP4 File (ufp1798vf.mp4): Video figure
  • MP4 File (3379337.3415815.mp4): Presentation video




Published In

UIST '20: Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology
October 2020
1297 pages
ISBN:9781450375146
DOI:10.1145/3379337


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. augmented reality
  2. context-aware application
  3. embodied authoring
  4. end-user programming tool
  5. in-situ authoring
  6. ubiquitous computing

Qualifiers

  • Research-article

Funding Sources

  • National Science Foundation

Conference

UIST '20

Acceptance Rates

Overall Acceptance Rate 561 of 2,567 submissions, 22%


Article Metrics

  • Downloads (Last 12 months)313
  • Downloads (Last 6 weeks)27
Reflects downloads up to 16 Feb 2025


Cited By

  • (2024) Transforming Procedural Instructions into In-Situ Augmented Reality Guides with InstructAR. Adjunct Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1-3. DOI: 10.1145/3672539.3686321. Online: 13-Oct-2024
  • (2024) exHAR. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(1), 1-30. DOI: 10.1145/3643500. Online: 6-Mar-2024
  • (2024) Jigsaw: Authoring Immersive Storytelling Experiences with Augmented Reality and Internet of Things. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-14. DOI: 10.1145/3613904.3642744. Online: 11-May-2024
  • (2024) ProInterAR: A Visual Programming Platform for Creating Immersive AR Interactions. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-15. DOI: 10.1145/3613904.3642527. Online: 11-May-2024
  • (2024) MineXR: Mining Personalized Extended Reality Interfaces. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-17. DOI: 10.1145/3613904.3642394. Online: 11-May-2024
  • (2024) Fast-Forward Reality: Authoring Error-Free Context-Aware Policies with Real-Time Unit Tests in Extended Reality. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-17. DOI: 10.1145/3613904.3642158. Online: 11-May-2024
  • (2024) RoboVisAR: Immersive Authoring of Condition-based AR Robot Visualisations. Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 462-471. DOI: 10.1145/3610977.3634972. Online: 11-Mar-2024
  • (2024) Emot Act AR: Tailoring Content Through User Emotion and Activity Analysis. 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 871-872. DOI: 10.1109/VRW62533.2024.00232. Online: 16-Mar-2024
  • (2024) Development of Augmented Reality Game Using Computer Vision Technology. 2024 IEEE 4th International Conference on Smart Information Systems and Technologies (SIST), 386-391. DOI: 10.1109/SIST61555.2024.10629276. Online: 15-May-2024
  • (2024) Towards Reconfigurable Cyber-Physical-Human Systems: Leveraging Mixed Reality and Digital Twins to integrate Human Operations. Procedia CIRP 130, 524-531. DOI: 10.1016/j.procir.2024.10.124. Online: 2024
