DOI: 10.1145/3468264.3468549
Research article · Public Access · Artifacts Evaluated & Reusable (v1.1)

Sound and efficient concurrency bug prediction

Published: 18 August 2021

ABSTRACT

Concurrency bugs are extremely difficult to detect. Several recent dynamic techniques achieve sound analysis, and M2 is even complete for two threads: it is designed to decide whether two events can occur consecutively. However, real-world concurrency bugs can involve more events and threads, and some occur whenever the order of two or more events can be exchanged, even if those events do not occur consecutively. We propose SeqCheck, a new technique that soundly decides whether a sequence of events can occur in a specified order, where the ordered sequence represents a potential concurrency bug. Several known forms of concurrency bugs can be easily encoded as event sequences, each representing one way the bug can occur. To achieve this, SeqCheck explicitly analyzes branch events and employs a set of efficient algorithms. We show that SeqCheck is sound, and that it is also complete on traces of two threads.

We have implemented SeqCheck to detect three types of concurrency bugs and evaluated it on 51 Java benchmarks producing up to billions of events. Compared with M2 and three other recent sound race detectors, SeqCheck detected 333 races in about 30 minutes, while the others detected between 130 and 285 races in roughly 6 to 12 hours. SeqCheck detected 20 deadlocks in about 6 seconds, only one fewer than Dirk, which spent more than an hour. SeqCheck also detected 30 atomicity violations in about 20 minutes. The evaluation shows that SeqCheck significantly outperforms existing concurrency bug detectors.
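To make the abstract's core idea concrete, here is a minimal, hypothetical Java sketch of a lost-update atomicity violation: a bug that manifests whenever an ordered sequence of three events across two threads can occur, even though the events need not be consecutive. The class and variable names are illustrative and do not come from the paper; the buggy interleaving is simulated deterministically in one thread so the effect is visible.

```java
// Hypothetical sketch of an atomicity violation as an ordered event sequence.
// The bug requires three events, from two threads, in the order e1, e2, e3;
// e2 only has to fall somewhere between e1 and e3, not adjacent to either.
public class AtomicityViolationSketch {
    static int x = 0;

    public static void main(String[] args) {
        // Deterministic simulation of the buggy interleaving:
        int local = x;          // e1: thread T1 reads x (sees 0)
        x = 5;                  // e2: thread T2 writes x between T1's read and write
        x = local + 1;          // e3: T1 writes back a value based on its stale read
        System.out.println(x);  // T2's write of 5 is lost; x is now 1
    }
}
```

A predictive detector of the kind the abstract describes would report this pattern by checking whether some sound reordering of an observed trace can realize the sequence e1, e2, e3.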


Published in ESEC/FSE 2021: Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. August 2021, 1690 pages.
ISBN: 9781450385626
DOI: 10.1145/3468264
Copyright © 2021 ACM
Publisher: Association for Computing Machinery, New York, NY, United States


Overall acceptance rate: 112 of 543 submissions (21%)
