DOI: 10.1145/3332165.3347933
Research Article

Learning Cooperative Personalized Policies from Gaze Data

Published: 17 October 2019

ABSTRACT

An ideal Mixed Reality (MR) system would only present virtual information (e.g., a label) when it is useful to the person. However, deciding when a label is useful is challenging: it depends on a variety of factors, including the current task, prior knowledge, and context. In this paper, we propose a Reinforcement Learning (RL) method to learn when to show or hide an object's label given eye movement data. We demonstrate the capabilities of this approach by showing that an intelligent agent can learn cooperative policies that better support users in a visual search task than manually designed heuristics. Furthermore, we show the applicability of our approach to more realistic environments and use cases (e.g., grocery shopping). By posing MR object labeling as a model-free RL problem, we can learn policies implicitly by observing users' behavior, without requiring a visual search model or data annotation.
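The abstract frames MR label display as a model-free RL problem learned from gaze behavior. As a rough sketch only (not the authors' implementation: the gaze features, state discretization, and reward signal below are illustrative assumptions), an epsilon-greedy Q-learning agent over a binary show/hide action space could look like this:

```python
# Minimal sketch: label show/hide as model-free RL driven by gaze data.
# NOT the paper's implementation; features, buckets, and rewards are assumptions.
import random
from collections import defaultdict

ACTIONS = ("show", "hide")

def discretize(fixation_ms, since_last_fixation_ms):
    """Bucket assumed gaze features (fixation duration on the object and time
    since the object was last fixated) into a coarse discrete state."""
    return (min(int(fixation_ms // 250), 4),
            min(int(since_last_fixation_ms // 1000), 4))

class LabelPolicy:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy choice over the binary show/hide action space.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # One-step Q-learning backup.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
```

In such a sketch the reward would have to be derived implicitly from subsequent behavior (for example, whether the user actually fixates a label that was shown), which mirrors the abstract's point that policies are learned by observing users rather than from annotated data.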


Supplemental Material

ufp8079pv.mp4 (mp4, 22.2 MB)
ufp8079vf.mp4 (mp4, 157.9 MB)
p197-gebhardt.mp4 (mp4, 244 MB)


Published in

UIST '19: Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology
October 2019, 1229 pages
ISBN: 9781450368162
DOI: 10.1145/3332165

Copyright © 2019 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher: Association for Computing Machinery, New York, NY, United States

