ABSTRACT
We present an alternative approach to smartwatch interaction that uses non-voice acoustic input, captured by the device's built-in microphone, to complement touch and speech. Whoosh is an interaction technique that recognizes the type and duration of acoustic events performed by the user, enabling low-cost, hands-free, and rapid input on smartwatches. We build a recognition system capable of detecting non-voice events directed at and around the watch, including blows, sip-and-puff, and directional air swipes, without any hardware modification to the device. Further, inspired by the design of musical instruments, we develop a custom modification of the physical structure of the watch case that passively alters the acoustic response of events around the bezel; this physical redesign expands our input vocabulary with no additional electronics. We evaluate our technique across 8 users: 10 events achieve up to 90.5% ten-fold cross-validation accuracy on an unmodified watch, and 14 events achieve 91.3% ten-fold cross-validation accuracy with an instrumented watch case. Finally, we present a number of demonstration applications, including multi-device interactions, that highlight our technique with a real-time recognizer running on the watch.
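The evaluation pipeline described above — extracting spectral features from short acoustic events and scoring a classifier with ten-fold cross-validation — can be illustrated with a minimal, self-contained sketch. Everything here is an illustrative assumption, not the authors' implementation: the synthetic "blow" and "swipe" signals, the coarse DFT band-energy features, and the nearest-centroid classifier are toy stand-ins for the paper's real microphone data, features, and recognizer.

```python
import math
import random

def band_energies(signal, n_bands=8):
    # Naive DFT magnitudes pooled into coarse frequency bands
    # (a toy stand-in for real acoustic features such as MFCCs).
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    per_band = len(mags) // n_bands
    return [sum(mags[b * per_band:(b + 1) * per_band]) for b in range(n_bands)]

def make_event(kind, rng, n=64):
    # Hypothetical signals: a "blow" is low-frequency-heavy,
    # an air "swipe" high-frequency, each with additive noise.
    freq_bin = 2 if kind == "blow" else 20
    return [math.sin(2 * math.pi * freq_bin * t / n) + rng.gauss(0, 0.3)
            for t in range(n)]

def ten_fold_accuracy(features, labels, n_folds=10):
    # Hold out one fold at a time; classify each held-out sample by the
    # nearest per-class centroid computed from the remaining folds.
    folds = [list(range(i, len(features), n_folds)) for i in range(n_folds)]
    correct = 0
    for held_out in folds:
        train = [i for i in range(len(features)) if i not in held_out]
        centroids = {}
        for lab in set(labels):
            rows = [features[i] for i in train if labels[i] == lab]
            centroids[lab] = [sum(col) / len(rows) for col in zip(*rows)]
        for i in held_out:
            pred = min(centroids, key=lambda lab: sum(
                (a - b) ** 2 for a, b in zip(features[i], centroids[lab])))
            correct += pred == labels[i]
    return correct / len(features)

rng = random.Random(0)
labels = ["blow", "swipe"] * 20
features = [band_energies(make_event(lab, rng)) for lab in labels]
print(f"10-fold accuracy: {ten_fold_accuracy(features, labels):.2f}")
```

Because the two synthetic classes concentrate their energy in different bands, the sketch separates them almost perfectly; the paper's reported 90.5% and 91.3% figures come from its real 10- and 14-event vocabularies, which are far harder to discriminate.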
Whoosh: non-voice acoustics for low-cost, hands-free, and rapid input on smartwatches