
Radio2Text: Streaming Speech Recognition Using mmWave Radio Signals

Published: 27 September 2023

Abstract

Millimeter wave (mmWave) based speech recognition opens up new possibilities for audio-related applications such as conference speech transcription and eavesdropping. However, for practical use in real scenarios, latency and recognizable vocabulary size are two critical factors that cannot be overlooked. In this paper, we propose Radio2Text, the first mmWave-based system for streaming automatic speech recognition (ASR) with a vocabulary size exceeding 13,000 words. Radio2Text is built on a tailored streaming Transformer that effectively learns representations of speech-related features, paving the way for streaming ASR with a large vocabulary. To alleviate the limitation that streaming networks cannot access entire future inputs, we propose Guidance Initialization, which transfers feature knowledge related to the global context from a non-streaming Transformer to the tailored streaming Transformer through weight inheritance. Further, we propose a cross-modal structure based on knowledge distillation (KD), named cross-modal KD, to mitigate the negative effect of low-quality mmWave signals on recognition performance. In cross-modal KD, an audio streaming Transformer provides feature and response guidance that carry rich and accurate speech information to supervise the training of the tailored radio streaming Transformer. Experimental results show that Radio2Text achieves a character error rate of 5.7% and a word error rate of 9.4% on a vocabulary of over 13,000 words.
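For context on the two training mechanisms named in the abstract, the sketch below shows how weight inheritance and a combined feature/response distillation loss are commonly written in PyTorch. This is a minimal sketch, not the authors' implementation: the function names, the assumption that both encoders share the same parameter layout, the choice of MSE for feature guidance and temperature-scaled KL divergence for response guidance, and the weights alpha, beta, and tau are all illustrative assumptions.

```python
# Hedged sketch (not the paper's code): Guidance Initialization via weight
# inheritance, plus a cross-modal KD loss with feature- and response-level
# guidance from a frozen audio teacher to a radio student.
import torch
import torch.nn.functional as F


def guidance_initialization(streaming_encoder, non_streaming_encoder):
    """Initialize the streaming Transformer from a trained non-streaming one.

    Assumes both encoders expose the same parameter layout, so weights that
    encode global-context knowledge can simply be inherited.
    """
    streaming_encoder.load_state_dict(
        non_streaming_encoder.state_dict(), strict=False
    )


def cross_modal_kd_loss(radio_feats, audio_feats, radio_logits, audio_logits,
                        asr_loss, alpha=1.0, beta=1.0, tau=2.0):
    """Total training loss for the radio (student) streaming Transformer.

    radio_feats / audio_feats: hidden representations (B, T, D) from the radio
    student and the frozen audio teacher.
    radio_logits / audio_logits: output token logits (B, T, V).
    asr_loss: the supervised transcription loss (e.g., CTC or transducer loss).
    """
    # Feature guidance: pull the student's intermediate representations toward
    # the teacher's richer, more accurate speech features.
    feat_loss = F.mse_loss(radio_feats, audio_feats.detach())

    # Response guidance: match the student's temperature-softened output
    # distribution to the teacher's, as in standard response-based KD.
    resp_loss = F.kl_div(
        F.log_softmax(radio_logits / tau, dim=-1),
        F.softmax(audio_logits.detach() / tau, dim=-1),
        reduction="batchmean",
    ) * tau * tau

    return asr_loss + alpha * feat_loss + beta * resp_loss
```

In a setup like this, the audio teacher is trained first and then frozen; only the radio student receives gradients from the combined loss, so the distillation terms act purely as additional supervision.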

Supplemental Material

ZIP File - zhao
Supplemental movie, appendix, image, and software files for Radio2Text: Streaming Speech Recognition Using mmWave Radio Signals



Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 7, Issue 3
September 2023, 1734 pages
EISSN: 2474-9567
DOI: 10.1145/3626192

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 27 September 2023
Published in IMWUT Volume 7, Issue 3


Author Tags

  1. Knowledge Distillation
  2. Millimeter Wave
  3. Radar Sensing
  4. Streaming Speech Recognition
  5. Wireless Sensing

Qualifiers

  • Research-article
  • Research
  • Refereed

