Abstract
A multi-touch interactive tabletop is designed to embody the benefits of a digital computer within the familiar surface of a physical tabletop. However, because current multi-touch tabletops detect and react to all forms of touch, including unintentional touches, users cannot act naturally on them. In our research, we leverage gaze direction, head orientation, and screen contact data to identify and filter out unintentional touches, so that users can take full advantage of the physical properties of an interactive tabletop, e.g., resting their hands or leaning on the tabletop during interaction. To achieve this, we first conducted a user study to identify differences in behavioral patterns (gaze, head, and touch) between completing typical tasks on digital versus physical tabletops. We then distilled our findings into five types of spatiotemporal features and trained a machine learning model that recognizes unintentional touches with an F1 score of 91.3%, outperforming the state-of-the-art model by 4.3%. Finally, we evaluated our algorithm in a real-time filtering system. A user study shows that our algorithm is stable, and that the improved tabletop effectively screens out unintentional touches and provides a more relaxed and natural user experience. By linking users' gaze and head behavior to their touch behavior, our work sheds light on how future tabletop technology can better understand users' input intention.
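The classification pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn's `GradientBoostingClassifier` and uses synthetic, hypothetical features (gaze-to-touch distance, head-orientation angle, contact area, contact duration, and touch movement) standing in for the paper's five spatiotemporal feature types.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
half = 500

# Hypothetical per-touch features (units are illustrative only):
# gaze-to-touch distance (px), head-to-touch angle (deg),
# contact area (mm^2), contact duration (ms), touch movement (px).
intentional = np.column_stack([
    rng.normal(60, 30, half),    # gaze tends to land near intentional touches
    rng.normal(15, 8, half),     # head roughly oriented toward the touch
    rng.normal(80, 20, half),    # fingertip-sized contact
    rng.normal(150, 50, half),   # brief taps/drags
    rng.normal(20, 10, half),    # purposeful movement
])
unintentional = np.column_stack([
    rng.normal(300, 100, half),  # resting hands far from the gaze point
    rng.normal(45, 15, half),
    rng.normal(400, 150, half),  # palm/forearm-sized contact
    rng.normal(800, 300, half),  # long, static resting contact
    rng.normal(5, 3, half),
])
X = np.vstack([intentional, unintentional])
y = np.concatenate([np.zeros(half), np.ones(half)])  # 1 = unintentional

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
f1 = f1_score(y_te, clf.predict(X_te))
print(f"F1 on synthetic data: {f1:.3f}")
```

On well-separated synthetic data like this the classifier scores near-perfectly; the paper's 91.3% F1 reflects the much harder, real behavioral data.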
Supplemental Material
Available for Download
Supplemental movie, appendix, image, and software files for "Recognizing Unintentional Touch on Interactive Tabletop"