
Recognizing Unintentional Touch on Interactive Tabletop

Published: 18 March 2020

Abstract

A multi-touch interactive tabletop is designed to embody the benefits of a digital computer within the familiar surface of a physical tabletop. However, because current multi-touch tabletops detect and react to all forms of touch, including unintentional touches, users cannot act as naturally on them as they would on a physical table. In our research, we leverage gaze direction, head orientation, and screen contact data to identify and filter out unintentional touches, so that users can take full advantage of the physical properties of an interactive tabletop, e.g., resting their hands or leaning on it during interaction. To achieve this, we first conducted a user study to identify differences in behavioral patterns (gaze, head, and touch) between completing everyday tasks on digital versus physical tabletops. We then compiled our findings into five types of spatiotemporal features and trained a machine learning model that recognizes unintentional touches with an F1 score of 91.3%, outperforming the state-of-the-art model by 4.3%. Finally, we evaluated our algorithm in a real-time filtering system. A user study shows that our algorithm is stable, that the improved tabletop effectively screens out unintentional touches, and that it provides a more relaxed and natural user experience. By linking users' gaze and head behavior to their touch behavior, our work sheds light on how future tabletop technology can better understand users' input intention.
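To make the pipeline described above concrete, the sketch below shows how a touch-intention classifier of this kind could be trained. This is not the authors' implementation: the feature names (gaze-to-touch distance, head-orientation angle, contact area, contact duration, touch movement speed) are illustrative stand-ins for the paper's five spatiotemporal feature types, the gradient-boosting model is one plausible choice of classifier, and the data here are synthetic.

```python
# Minimal sketch (assumptions noted above): binary classification of touch
# events as intentional vs. unintentional from gaze, head, and contact features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row is one touch event; the five columns stand in for hypothetical
# features such as gaze-to-touch distance, head-orientation angle to the
# touch point, contact area, contact duration, and touch movement speed.
X = rng.random((1000, 5))
# Toy labels (1 = unintentional) derived from the synthetic features.
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_train, y_train)

print("F1 on held-out touches:", f1_score(y_test, clf.predict(X_test)))
```

In a real-time filtering system, the same trained model would be applied to each incoming touch event and touches predicted as unintentional would simply be discarded before reaching the application layer.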





• Published in

  Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 4, Issue 1
  March 2020
  1006 pages
  EISSN: 2474-9567
  DOI: 10.1145/3388993

        Copyright © 2020 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 18 March 2020
        Published in imwut Volume 4, Issue 1


        Qualifiers

        • research-article
        • Research
        • Refereed
