
Leveraging Smartwatch and Earbuds Gesture Capture to Support Wearable Interaction

Published: 14 November 2022

Abstract

Due to the proliferation of smart wearables, designers can now explore novel ways for end-users to use devices in combination. In this paper, we explore the gestural input enabled by smart earbuds coupled with a proximal smartwatch. Through an elicitation study, we identify a consensus set of gestures and a taxonomy of the types of gestures participants create. In a follow-on study conducted on Amazon Mechanical Turk, we explore the social acceptability of gestures enabled by watch+earbud gesture capture. While elicited gestures continue to be simple, discrete, in-context actions, we find that elicited input is frequently abstract, varies in size and duration, and is split almost equally between on-body, proximal, and more distant actions. Together, our results provide guidelines for on-body, near-ear, and in-air input using earbuds and a smartwatch to support gesture capture.
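Elicitation studies of this kind conventionally quantify consensus on a per-referent basis using the agreement rate AR of Vatavu and Wobbrock (2015). The paper itself publishes no code, so the sketch below is purely illustrative: the function name and the gesture labels in the example are hypothetical, and only the AR formula itself is standard.

```python
from collections import Counter

def agreement_rate(proposals):
    """Vatavu & Wobbrock (2015) agreement rate AR for a single referent.

    proposals: one gesture label per participant.
    Returns a value in [0, 1]: 1.0 when all participants propose the
    same gesture, 0.0 when every proposal is distinct.
    """
    n = len(proposals)
    if n < 2:
        return 1.0
    groups = Counter(proposals)  # group identical proposals
    s = sum((size / n) ** 2 for size in groups.values())
    # AR(r) = n/(n-1) * sum((|Pi|/|P|)^2) - 1/(n-1)
    return (n / (n - 1)) * s - 1 / (n - 1)

# Hypothetical referent "answer call" with 8 participants:
# 5 propose an ear tap, 2 a watch swipe, 1 a head nod.
ar = agreement_rate(["tap ear"] * 5 + ["swipe watch"] * 2 + ["nod"])
print(round(ar, 3))  # ≈ 0.393, moderate consensus
```

Referents whose AR clears a chosen threshold contribute their most frequent proposal to the consensus gesture set.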




      Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 6, Issue ISS
December 2022, 746 pages
EISSN: 2573-0142
DOI: 10.1145/3554337

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. earbuds
      2. elicitation study
      3. smartwatch
      4. wearables

      Qualifiers

      • Research-article

      Contributors

      Other Metrics

      Bibliometrics & Citations

      Bibliometrics

      Article Metrics

      • Downloads (Last 12 months)103
      • Downloads (Last 6 weeks)9
      Reflects downloads up to 17 Jan 2025

      Other Metrics

Cited By

      • (2024) Exploring User-Defined Gestures as Input for Hearables and Recognizing Ear-Level Gestures with IMUs. Proceedings of the ACM on Human-Computer Interaction, 8 (MHCI), 1–23. https://doi.org/10.1145/3676503. Published online: 24 September 2024.
      • (2024) EarHover: Mid-Air Gesture Recognition for Hearables Using Sound Leakage Signals. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–13. https://doi.org/10.1145/3654777.3676367. Published online: 13 October 2024.
      • (2024) Exploring Uni-manual Around Ear Off-Device Gestures for Earables. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8 (1), 1–29. https://doi.org/10.1145/3643513. Published online: 6 March 2024.
      • (2023) Robust Finger Interactions with COTS Smartwatches via Unsupervised Siamese Adaptation. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1–14. https://doi.org/10.1145/3586183.3606794. Published online: 29 October 2023.
      • (2023) Gesture-Based Interaction. In Handbook of Human Computer Interaction, 1–47. https://doi.org/10.1007/978-3-319-27648-9_20-1. Published online: 9 February 2023.
