
Midas: Generating mmWave Radar Data from Videos for Training Pervasive and Privacy-preserving Human Sensing Tasks

Published: 28 March 2023

Abstract

Millimeter-wave radar is a promising sensing modality for pervasive, privacy-preserving human sensing. However, the lack of large-scale radar datasets limits the generalization and robustness achievable when training deep learning models. To close this gap, we design a software pipeline that leverages vast video repositories to generate synthetic radar data. Doing so raises three key challenges: (i) multipath reflection and attenuation of radar signals among multiple humans; (ii) generated data that cannot be converted for downstream use, yielding poor generality across applications; and (iii) class imbalance in videos, which undermines model stability. To this end, we design Midas to generate realistic, convertible radar data from videos via two components: (i) a data generation network (DG-Net) that combines several key modules, depth prediction, human mesh fitting, and a multi-human reflection model, to simulate the multipath reflection and attenuation of radar signals and output convertible coarse radar data, which a Transformer model then refines into realistic radar data; and (ii) a variant Siamese network (VS-Net) that selects key video clips to eliminate data redundancy and address the class-imbalance issue. We implement and evaluate Midas with video data from various external data sources and real-world radar data, demonstrating clear advantages over the state-of-the-art approach on both activity recognition and object detection tasks.
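The multi-human reflection model summarized above can be illustrated with a toy point-scatterer sketch (our own simplification for intuition, not the authors' DG-Net): each body point reflects echo power attenuated as 1/r^4 per the radar equation, and its radial velocity v_r determines the Doppler shift f_d = 2·v_r/λ; accumulating the echoes of all scatterers yields a range-Doppler map.

```python
import numpy as np

def range_doppler_map(points, velocities, r_bins=32, d_bins=32,
                      r_max=5.0, v_max=2.0):
    """Toy point-scatterer model: accumulate each reflector's echo power
    into a range-Doppler histogram. Echo power falls off as 1/r^4
    (monostatic radar equation); the Doppler axis is indexed by the
    radial velocity v_r, which maps to frequency as f_d = 2*v_r/lambda."""
    rd = np.zeros((r_bins, d_bins))
    for p, v in zip(points, velocities):
        r = np.linalg.norm(p)               # range to radar at the origin
        v_r = np.dot(v, p / r)              # radial velocity component
        power = 1.0 / max(r, 0.1) ** 4      # two-way free-space attenuation
        ri = min(int(r / r_max * r_bins), r_bins - 1)
        di = int((v_r + v_max) / (2 * v_max) * d_bins)
        di = min(max(di, 0), d_bins - 1)
        rd[ri, di] += power
    return rd
```

A nearby scatterer dominates a distant one by a factor of (r2/r1)^4, which is why occlusion and attenuation among multiple humans must be modeled explicitly rather than treating each person independently.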



      Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 7, Issue 1
March 2023
1243 pages
EISSN: 2474-9567
DOI: 10.1145/3589760

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published in IMWUT Volume 7, Issue 1


      Author Tags

      1. cross domain translation
      2. data generation
      3. human activity recognition
      4. radar sensing

      Qualifiers

      • Research-article
      • Research
      • Refereed

Article Metrics

• Downloads (Last 12 months): 432
• Downloads (Last 6 weeks): 40
      Reflects downloads up to 17 Jan 2025

Cited By
• (2025) Video2mmPoint: Synthesizing mmWave Point Cloud Data From Videos for Gait Recognition. IEEE Sensors Journal 25, 1 (773--782). DOI: 10.1109/JSEN.2024.3483835. Online publication date: 1-Jan-2025.
• (2024) mmCLIP: Boosting mmWave-based Zero-shot HAR via Signal-Text Alignment. In Proceedings of the 22nd ACM Conference on Embedded Networked Sensor Systems (184--197). DOI: 10.1145/3666025.3699331. Online publication date: 4-Nov-2024.
• (2024) LoCal. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 4 (1--27). DOI: 10.1145/3631436. Online publication date: 12-Jan-2024.
• (2024) SBRF: A Fine-Grained Radar Signal Generator for Human Sensing. IEEE Transactions on Mobile Computing 23, 12 (13114--13130). DOI: 10.1109/TMC.2024.3427406. Online publication date: Dec-2024.
• (2024) Text2Doppler: Generating Radar Micro-Doppler Signatures for Human Activity Recognition via Textual Descriptions. IEEE Sensors Letters 8, 10 (1--4). DOI: 10.1109/LSENS.2024.3457169. Online publication date: Oct-2024.
• (2024) SIMFALL: A Data Generator for RF-Based Fall Detection. In ICASSP 2024 - IEEE International Conference on Acoustics, Speech and Signal Processing (8165--8169). DOI: 10.1109/ICASSP48485.2024.10446234. Online publication date: 14-Apr-2024.
• (2024) M4X: Enhancing Cross-View Generalizability in RF-Based Human Activity Recognition by Exploiting Synthetic Data in Metric Learning. In 2024 IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE) (49--60). DOI: 10.1109/CHASE60773.2024.00015. Online publication date: 19-Jun-2024.
• (2024) ODSen: A Lightweight, Real-Time, and Robust Object Detection System via Complementary Camera and mmWave Radar. IEEE Access 12 (129120--129133). DOI: 10.1109/ACCESS.2024.3451556. Online publication date: 2024.
• (2024) Personalized mmWave Signal Synthesis for Human Sensing. In Wireless Artificial Intelligent Computing Systems and Applications (267--279). DOI: 10.1007/978-3-031-71467-2_22. Online publication date: 14-Nov-2024.
• (2023) RF Genesis: Zero-Shot Generalization of mmWave Sensing through Simulation-Based Data Synthesis and Generative Diffusion Models. In Proceedings of the 21st ACM Conference on Embedded Networked Sensor Systems (28--42). DOI: 10.1145/3625687.3625798. Online publication date: 12-Nov-2023.
