DOI: 10.1145/3340555.3353740

Motion Eavesdropper: Smartwatch-based Handwriting Recognition Using Deep Learning

Published: 14 October 2019

ABSTRACT

This paper focuses on the real-life scenario in which people handwrite while wearing small mobile devices on their wrists. We explore the possibility of eavesdropping on privacy-related information from motion signals. To achieve this, we develop a new deep learning-based motion sensing framework with four major components: a recorder, a signal preprocessor, a feature extractor, and a handwriting recognizer. First, we integrate a series of simple yet effective signal processing techniques to purify the sensory data so that it reflects the kinetic properties of a handwriting motion. Then we use a Multimodal Convolutional Neural Network (MCNN) to extract abstract features. After that, a bidirectional Long Short-Term Memory (BLSTM) network models the temporal dynamics. Finally, we incorporate the Connectionist Temporal Classification (CTC) algorithm to realize end-to-end handwriting recognition. We prototype our design on a commercial off-the-shelf smartwatch and carry out extensive experiments. The encouraging results show that our system robustly achieves an average accuracy of 64% at the character level and 71.9% at the word level, and a 56.6% accuracy rate for words unseen in the training set under certain conditions, which exposes the danger of privacy disclosure in daily life.
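The pipeline described above (motion-signal window → MCNN feature extractor → BLSTM → CTC) can be illustrated with a minimal sketch. The PyTorch model below is an assumed reconstruction for illustration only: the layer sizes, the six-axis accelerometer/gyroscope input, the 26-letter character vocabulary, and the name MotionHandwritingNet are assumptions, not the authors' published configuration.

```python
# Minimal illustrative sketch of a CNN + BLSTM + CTC recognizer for wrist-motion
# signals. Layer sizes, 6-axis IMU input, and vocabulary are assumptions.
import torch
import torch.nn as nn

class MotionHandwritingNet(nn.Module):
    def __init__(self, num_chars=26, imu_channels=6, conv_channels=64, lstm_hidden=128):
        super().__init__()
        # Convolutional feature extractor over the raw motion time series
        # (a multimodal variant could use one branch per sensor modality).
        self.cnn = nn.Sequential(
            nn.Conv1d(imu_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(conv_channels, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bidirectional LSTM models temporal dynamics of the feature sequence.
        self.blstm = nn.LSTM(conv_channels, lstm_hidden,
                             bidirectional=True, batch_first=True)
        # Per-frame character scores; index 0 is reserved for the CTC blank.
        self.fc = nn.Linear(2 * lstm_hidden, num_chars + 1)

    def forward(self, x):
        # x: (batch, imu_channels, time)
        feats = self.cnn(x)               # (batch, conv_channels, time // 4)
        feats = feats.transpose(1, 2)     # (batch, time // 4, conv_channels)
        out, _ = self.blstm(feats)        # (batch, time // 4, 2 * lstm_hidden)
        logits = self.fc(out)             # (batch, time // 4, num_chars + 1)
        # CTC loss expects (time, batch, classes) log-probabilities.
        return logits.log_softmax(dim=-1).transpose(0, 1)

# One training step with CTC loss on dummy data (shapes are illustrative).
model = MotionHandwritingNet()
ctc = nn.CTCLoss(blank=0)
signals = torch.randn(8, 6, 400)                 # 8 windows, 6 IMU axes, 400 samples
targets = torch.randint(1, 27, (8, 5))           # 5-character labels per window
log_probs = model(signals)
input_lengths = torch.full((8,), log_probs.size(0), dtype=torch.long)
target_lengths = torch.full((8,), 5, dtype=torch.long)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```

At inference, a greedy or beam-search CTC decoder would collapse repeated per-frame predictions and remove blanks to produce the recognized character sequence, which is what enables end-to-end word recognition without pre-segmented characters.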


Published in
    ICMI '19: 2019 International Conference on Multimodal Interaction
    October 2019
    601 pages
    ISBN: 978-1-4503-6860-5
    DOI: 10.1145/3340555

    Copyright © 2019 ACM


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Qualifiers

    • research-article
    • Research
    • Refereed limited

    Acceptance Rates

    Overall acceptance rate: 453 of 1,080 submissions, 42%
