DOI: 10.1145/3560905.3568504
Research article · Open access

Capricorn: Towards Real-Time Rich Scene Analysis Using RF-Vision Sensor Fusion

Published: 24 January 2023

Abstract

Video scene analysis is a well-investigated area where researchers have devoted efforts to detecting and classifying people and objects in a scene. However, real-life scenes are more complex: the intrinsic states of objects (e.g., machine operating states or human vital signs) are often overlooked by vision-based scene analysis. Recent work has proposed a radio frequency (RF) sensing technique, wireless vibrometry, that employs wireless signals to sense subtle vibrations from objects and infer their internal states. We envision that combining video scene analysis with wireless vibrometry forms a more comprehensive understanding of the scene, namely "rich scene analysis". However, the RF sensors used in wireless vibrometry only provide time series, and it is challenging to associate these time series with multiple real-world objects. We propose a real-time RF-vision sensor fusion system, Capricorn, that efficiently builds a cross-modal correspondence between visual pixels and RF time series to better understand the complex nature of a scene. The vision sensors in Capricorn model the surrounding environment in 3D and obtain the distances of different objects. In the RF domain, distance is proportional to the signal time-of-flight (ToF), so we can leverage the ToF to separate the RF time series corresponding to each object. The RF-vision sensor fusion in Capricorn brings multiple benefits. The vision sensors provide environmental context to guide the processing of RF data, which helps us select the most appropriate algorithms and models. Meanwhile, the RF sensor yields additional information that is originally invisible to vision sensors, providing insight into objects' intrinsic states. Our extensive evaluations show that Capricorn monitors multiple appliances' operating status in real time with an accuracy of 97%+ and recovers vital signs such as respiration from multiple people.
A video (https://youtu.be/b-5nav3Fi78) demonstrates the capability of Capricorn.
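The ToF-based association the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the radar range-bin spacing, and the frame layout are all assumptions. The idea is that a camera-estimated distance d implies a round-trip ToF of 2d/c, which in turn selects the RF range bin whose per-frame samples form the time series attributed to that object.

```python
# Illustrative sketch only (hypothetical names/parameters, not Capricorn's code):
# map a vision-derived object distance to a radar range bin, then extract
# that bin's per-frame samples as the object's RF time series.
C = 3.0e8  # speed of light, m/s

def range_bin_for_distance(distance_m, bin_spacing_m):
    """Return (nearest range-bin index, round-trip ToF in seconds)
    for a camera-estimated object distance."""
    tof_s = 2.0 * distance_m / C          # round-trip time of flight
    bin_idx = round(distance_m / bin_spacing_m)  # bins are spaced in range
    return bin_idx, tof_s

def separate_series(range_profile_frames, bin_idx):
    """Collect one range bin's sample from each radar frame, yielding
    the time series associated with the object at that distance."""
    return [frame[bin_idx] for frame in range_profile_frames]
```

For example, with an assumed 5 cm bin spacing, an object the camera places at 1.0 m maps to bin 20, and its vibration signature is read off by indexing that bin across frames.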


Cited By

  • (2024) HomeOSD: Appliance Operating-Status Detection Using mmWave Radar. Sensors 24, 9 (2024), 2911. DOI: 10.3390/s24092911
  • (2024) TinyNS: Platform-aware Neurosymbolic Auto Tiny Machine Learning. ACM Transactions on Embedded Computing Systems 23, 3 (2024), 1-48. DOI: 10.1145/3603171
  • (2023) Short: RF-Q: Unsupervised Signal Quality Assessment for Robust RF-based Respiration Monitoring. In Proceedings of the 8th ACM/IEEE International Conference on Connected Health: Applications, Systems and Engineering Technologies. 158-162. DOI: 10.1145/3580252.3586988


Published In

SenSys '22: Proceedings of the 20th ACM Conference on Embedded Networked Sensor Systems
November 2022
1280 pages
ISBN:9781450398862
DOI:10.1145/3560905
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Acceptance Rates

SenSys '22 Paper Acceptance Rate 52 of 187 submissions, 28%;
Overall Acceptance Rate 198 of 990 submissions, 20%


Article Metrics

  • Downloads (last 12 months): 446
  • Downloads (last 6 weeks): 44
Reflects downloads up to 22 Feb 2025

