Performance Bug Analysis and Detection for Distributed Storage and Computing Systems

Published: 19 June 2023

Abstract

This article systematically studies 99 distributed performance bugs from five widely deployed distributed storage and computing systems (Cassandra, HBase, HDFS, Hadoop MapReduce, and ZooKeeper). We present the TaxPerf database, which collectively organizes the analysis results as over 400 classification labels and over 2,500 lines of bug re-description. TaxPerf classifies the bugs into six categories (and 18 subcategories) by root cause: resource, blocking, synchronization, optimization, configuration, and logic. TaxPerf can serve as a benchmark for performance bug studies and for the design of debugging tools. Although it is impractical to automatically detect all categories of performance bugs in TaxPerf, we find that an important category, blocking bugs, can be effectively addressed by analysis tools. We analyze the cascading nature of blocking bugs and design an automatic detection tool called PCatch, which (i) performs program analysis to identify code regions whose execution time can potentially increase dramatically with the workload size; (ii) adapts the traditional happens-before model to reason about software resource contention and performance-dependency relationships; and (iii) uses dynamic tracking to identify whether the slowdown propagation is contained in one job. Evaluation shows that PCatch can accurately detect blocking bugs in representative distributed storage and computing systems by observing system executions under small-scale workloads.
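The happens-before-based contention reasoning sketched in step (ii) can be illustrated with a toy detector. The sketch below is a drastic simplification of the idea and not PCatch's actual implementation: the region encoding and the `find_blocking_candidates` helper are hypothetical. It flags a pair of code regions as a blocking-bug candidate when a workload-scalable region and a latency-sensitive region contend on the same lock without any happens-before ordering between them.

```python
# Illustrative sketch only (hypothetical names; not PCatch's real design):
# report a blocking-bug candidate when a workload-scalable region and a
# latency-sensitive region share a lock and are unordered by happens-before.
from collections import defaultdict

def happens_before_closure(edges):
    """Transitive closure of explicit ordering edges
    (program order, fork/join, message send -> receive)."""
    hb = defaultdict(set)
    for a, b in edges:
        hb[a].add(b)
    changed = True
    while changed:
        changed = False
        for a in list(hb):
            for b in list(hb[a]):
                for c in hb.get(b, ()):
                    if c not in hb[a]:
                        hb[a].add(c)
                        changed = True
    return hb

def find_blocking_candidates(regions, hb_edges):
    """regions: name -> {'lock': id, 'scalable': bool, 'sensitive': bool}.
    Returns (scalable, sensitive) pairs that may contend: same lock,
    no happens-before order in either direction."""
    hb = happens_before_closure(hb_edges)
    candidates = []
    for a, ra in regions.items():
        for b, rb in regions.items():
            if a == b or not (ra["scalable"] and rb["sensitive"]):
                continue
            if ra["lock"] != rb["lock"]:
                continue  # different resources: no lock contention
            if b in hb.get(a, ()) or a in hb.get(b, ()):
                continue  # ordered executions cannot contend
            candidates.append((a, b))
    return candidates
```

For example, a table-scan loop whose iteration count grows with the dataset and a periodic heartbeat handler that grab the same lock, with no ordering edge between them, would be reported; adding a happens-before edge between the two regions suppresses the report.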


• Published in

  ACM Transactions on Storage, Volume 19, Issue 3 (August 2023), 233 pages
  ISSN: 1553-3077  EISSN: 1553-3093
  DOI: 10.1145/3604654


          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 19 June 2023
          • Online AM: 18 January 2023
          • Accepted: 29 December 2022
          • Revised: 18 December 2022
          • Received: 5 April 2022


          Qualifiers

          • research-article
