DOI: 10.1145/3106237.3106242
research-article
Public Access

AtexRace: across thread and execution sampling for in-house race detection

Published: 21 August 2017

ABSTRACT

Data races are a major source of concurrency bugs. Dynamic data race detection tools (e.g., FastTrack) monitor the executions of a program to report data races occurring at runtime. However, such tools incur significant overhead that slows down and perturbs executions. To address the issue, state-of-the-art dynamic data race detection tools (e.g., LiteRace) apply sampling techniques to selectively monitor memory accesses. Although they reduce overhead, they also miss many data races, as confirmed by existing studies. Thus, practitioners face a dilemma: use FastTrack, which detects more data races but is much slower, or LiteRace, which is faster but detects fewer data races. In this paper, we propose a new sampling approach to address the major limitations of current sampling techniques, which ignore the facts that a data race involves two threads and that a program under testing is repeatedly executed. We develop a tool called AtexRace to sample memory accesses across both threads and executions. By selectively monitoring the pairs of memory accesses that have not been frequently observed in current and previous executions, AtexRace detects as many data races as FastTrack at a cost as low as LiteRace's. We have compared AtexRace against FastTrack and LiteRace on both the Parsec benchmark suite and a large-scale real-world MySQL Server with 223 test cases. The experiments confirm that AtexRace can be a replacement for FastTrack and LiteRace.
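
The sampling policy described above (monitor a pair of memory accesses issued by two threads, and carry observation counts over from previous executions) can be illustrated with a small sketch. The following Python snippet is a minimal conceptual illustration, not the AtexRace implementation: the history file name, the decay rule, and all identifiers are hypothetical assumptions, and it only shows the idea of always monitoring rarely seen access pairs while sampling frequently seen ones at a low rate.

    # Hypothetical sketch of across-thread-and-execution sampling; not the
    # authors' code. "atex_history.json" and all names below are invented
    # purely for illustration.
    import json, os, random

    HISTORY_FILE = "atex_history.json"   # persisted across executions (assumed)
    BASE_RATE = 0.01                     # floor sampling rate for hot pairs (assumed)

    def load_history():
        # Counts of how often each access pair was observed in earlier runs.
        if os.path.exists(HISTORY_FILE):
            with open(HISTORY_FILE) as f:
                return json.load(f)
        return {}

    def save_history(history):
        with open(HISTORY_FILE, "w") as f:
            json.dump(history, f)

    def should_monitor(pair_key, history):
        # Always monitor a pair never seen before; sample a frequently seen
        # pair with a probability that decays with its observation count.
        seen = history.get(pair_key, 0)
        if seen == 0:
            return True
        return random.random() < max(BASE_RATE, 1.0 / (seen + 1))

    def record_observation(pair_key, history):
        history[pair_key] = history.get(pair_key, 0) + 1

    # Usage example: one execution of the program under test. Each pair holds
    # the source locations of two accesses made by different threads.
    history = load_history()
    for pair in [("a.c:10", "b.c:20"), ("a.c:10", "b.c:20"), ("c.c:5", "d.c:7")]:
        key = "|".join(sorted(pair))
        if should_monitor(key, history):
            record_observation(key, history)   # pair handed to the race detector
    save_history(history)

Under such a policy, the first execution pays close to full-tracking cost, while later executions skip pairs that have already been exercised, which is how the approach aims to combine FastTrack-level detection with LiteRace-level overhead.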

References

  1. B. Alpern, C.R. Attanasio, A. Cocchi, D. Lieber, S. Smith, T. Ngo, J.J. Barton, S.F. Hummel, J.C. Shepherd, and M. Mergen. Implementing Jalapeño in Java. In Proc. OOPSLA, 314-324, 1999.
  2. C. Bienia. Benchmarking Modern Multiprocessors. Ph.D. thesis, Princeton University, January 2011.
  3. S. Biswas, M. Cao, M. Zhang, M.D. Bond, and B.P. Wood. Lightweight data race detection for production runs. In Proc. CC, 11-21, 2017.
  4. S. Biswas, M. Zhang, M.D. Bond, and B. Lucia. Valor: efficient, software-only region conflict exceptions. In Proc. OOPSLA, 241-259, 2015.
  5. S.M. Blackburn, R. Garner, C. Hoffmann, A.M. Khang, K.S. McKinley, R. Bentzur, A. Diwan, D. Feinberg, D. Frampton, S.Z. Guyer, M. Hirzel, A. Hosking, M. Jump, H. Lee, J. Eliot B. Moss, A. Phansalkar, D. Stefanović, T. VanDrunen, D. von Dincklage, and B. Wiedermann. The DaCapo benchmarks: Java benchmarking development and analysis. In Proc. OOPSLA, 169-190, 2006.
  6. E. Bodden and K. Havelund. Racer: effective race detection using AspectJ. In Proc. ISSTA, 155-166, 2008.
  7. M.D. Bond, K.E. Coons, and K.S. McKinley. PACER: proportional detection of data races. In Proc. PLDI, 255-268, 2010.
  8. A. Bron, E. Farchi, Y. Magid, Y. Nir, and S. Ur. Applications of synchronization coverage. In Proc. PPoPP, 206-212, 2005.
  9. S. Burckhardt, P. Kothari, M. Musuvathi, and S. Nagarakatte. A randomized scheduler with probabilistic guarantees of finding bugs. In Proc. ASPLOS, 167-178, 2010.
  10. Y. Cai and L. Cao. Effective and precise dynamic detection of hidden races for Java programs. In Proc. ESEC/FSE, 450-461, 2015.
  11. Y. Cai and W.K. Chan. LOFT: redundant synchronization event removal for data race detection. In Proc. ISSRE, 160-169, 2011.
  12. Y. Cai, J. Zhang, L. Cao, and J. Liu. A deployable sampling strategy for data race detection. In Proc. FSE, 810-821, 2016.
  13. D. Dimitrov, V. Raychev, M. Vechev, and E. Koskinen. Commutativity race detection. In Proc. PLDI, 305-315, 2014.
  14. J. Erickson, M. Musuvathi, S. Burckhardt, and K. Olynyk. Effective data-race detection for the kernel. In Proc. OSDI, 1-6, 2010.
  15. M. Eslamimehr and J. Palsberg. Race directed scheduling of concurrent programs. In Proc. PPoPP, 301-314, 2014.
  16. C. Flanagan and S.N. Freund. FastTrack: efficient and precise dynamic race detection. In Proc. PLDI, 121-133, 2009.
  17. C. Flanagan and S.N. Freund. The RoadRunner dynamic analysis framework for concurrent programs. In Proc. PASTE, 1-8, 2010.
  18. C. Flanagan and P. Godefroid. Dynamic partial-order reduction for model checking software. In Proc. POPL, 110-121, 2005.
  19. S. Hong, J. Ahn, S. Park, M. Kim, and M.J. Harrold. Testing concurrent programs to achieve high synchronization coverage. In Proc. ISSTA, 210-220, 2012.
  20. S. Hong, Y. Park, and M. Kim. Detecting concurrency errors in client-side JavaScript web applications. In Proc. ICST, 61-70, 2014.
  21. C. Hsiao, Y. Yu, S. Narayanasamy, Z. Kong, C.L. Pereira, G.A. Pokam, P.M. Chen, and J. Flinn. Race detection for event-driven mobile applications. In Proc. PLDI, 326-336, 2014.
  22. J. Huang, P.O. Meredith, and G. Rosu. Maximal sound predictive race detection with control flow abstraction. In Proc. PLDI, 337-348, 2014.
  23. J. Jackson. Nasdaq's Facebook glitch came from 'race conditions', May 21, 2012. http://www.computerworld.com/article/2504676/financial-it/nasdaq-s-facebook-glitch-came-from--race-conditions-.html, last visited March 2016.
  24. G. Jin, A. Thakur, B. Liblit, and S. Lu. Instrumentation and sampling strategies for cooperative concurrency bug isolation. In Proc. OOPSLA, 241-255, 2010.
  25. V. Kahlon, Y. Yang, S. Sankaranarayanan, and A. Gupta. Fast and accurate static data-race detection for concurrent programs. In Proc. CAV, 226-239, 2007.
  26. B. Kasikci, C. Zamfir, and G. Candea. RaceMob: crowdsourced data race detection. In Proc. SOSP, 406-422, 2013.
  27. L. Lamport. Time, clocks, and the ordering of events in a distributed system. Communications of the ACM, 21(7):558-565, 1978.
  28. Z. Letko, T. Vojnar, and B. Křena. Coverage metrics for saturation-based and search-based testing of concurrent software. In Proc. RV, 177-192, 2011.
  29. N.G. Leveson and C.S. Turner. An investigation of the Therac-25 accidents. Computer, 26(7):18-41, 1993.
  30. S. Lu, S. Park, E. Seo, and Y.Y. Zhou. Learning from mistakes: a comprehensive study on real world concurrency bug characteristics. In Proc. ASPLOS, 329-339, 2008.
  31. B. Lucia and L. Ceze. Cooperative empirical failure avoidance for multithreaded programs. In Proc. ASPLOS, 39-50, 2013.
  32. C.-K. Luk, R. Cohn, R. Muth, H. Patil, A. Klauser, G. Lowney, S. Wallace, V.J. Reddi, and K. Hazelwood. Pin: building customized program analysis tools with dynamic instrumentation. In Proc. PLDI, 191-200, 2005.
  33. P. Maiya, A. Kanade, and R. Majumdar. Race detection for Android applications. In Proc. PLDI, 316-325, 2014.
  34. D. Marino, M. Musuvathi, and S. Narayanasamy. LiteRace: effective sampling for lightweight data-race detection. In Proc. PLDI, 134-143, 2009.
  35. M. Musuvathi, S. Qadeer, T. Ball, G. Basler, P.A. Nainar, and I. Neamtiu. Finding and reproducing heisenbugs in concurrent programs. In Proc. OSDI, 267-280, 2008.
  36. S. Nagarakatte, S. Burckhardt, M.M.K. Martin, and M. Musuvathi. Multicore acceleration of priority-based schedulers for concurrency bug detection. In Proc. PLDI, 543-554, 2012.
  37. M. Naik, A. Aiken, and J. Whaley. Effective static race detection for Java. In Proc. PLDI, 308-319, 2006.
  38. S. Narayanasamy, Z. Wang, J. Tigani, A. Edwards, and B. Calder. Automatically classifying benign and harmful data races using replay analysis. In Proc. PLDI, 22-31, 2007.
  39. C.S. Park, K. Sen, P. Hargrove, and C. Iancu. Efficient data race detection for distributed memory parallel programs. In Proc. SC, 2011.
  40. K. Poulsen. Software bug contributed to blackout. http://www.securityfocus.com/news/8016, Feb. 2004.
  41. E. Pozniansky and A. Schuster. Efficient on-the-fly data race detection in multithreaded C++ programs. In Proc. PPoPP, 179-190, 2003.
  42. P. Pratikakis, J.S. Foster, and M. Hicks. LOCKSMITH: context-sensitive correlation analysis for race detection. In Proc. PLDI, 320-331, 2006.
  43. A.K. Rajagopalan and J. Huang. RDIT: race detection from incomplete traces. In Proc. ESEC/FSE, 914-917, 2015.
  44. S. Savage, M. Burrows, G. Nelson, P. Sobalvarro, and T. Anderson. Eraser: a dynamic data race detector for multithreaded programs. ACM TOCS, 15(4):391-411, 1997.
  45. K. Sen. Race directed random testing of concurrent programs. In Proc. PLDI, 11-21, 2008.
  46. K. Serebryany and T. Iskhodzhanov. ThreadSanitizer: data race detection in practice. In Proc. WBIA, 62-71, 2009.
  47. Y. Smaragdakis, J. Evans, C. Sadowski, J. Yi, and C. Flanagan. Sound predictive race detection in polynomial time. In Proc. POPL, 387-400, 2012.
  48. F. Sorrentino, A. Farzan, and P. Madhusudan. PENELOPE: weaving threads to expose atomicity violations. In Proc. FSE, 37-46, 2010.
  49. Microsoft. Thread execution blocks. http://msdn.microsoft.com/enus/library/ms686708.aspx
  50. V. Kahlon and C. Wang. Universal causality graphs: a precise happens-before model for detecting bugs in concurrent programs. In Proc. CAV, 434-449, 2010.
  51. J.W. Voung, R. Jhala, and S. Lerner. RELAY: static race detection on millions of lines of code. In Proc. FSE, 205-214, 2007.
  52. C. Wang and K. Hoang. Precisely deciding control state reachability in concurrent traces with limited observability. In Proc. VMCAI, 376-394, 2014.
  53. C. Wang, M. Said, and A. Gupta. Coverage guided systematic concurrency testing. In Proc. ICSE, 221-230, 2011.
  54. X.W. Xie and J.L. Xue. Acculock: accurate and efficient detection of data races. In Proc. CGO, 201-212, 2011.
  55. J. Yu, S. Narayanasamy, C. Pereira, and G. Pokam. Maple: a coverage-driven testing tool for multithreaded programs. In Proc. OOPSLA, 485-502, 2012.
  56. Y. Yu, T. Rodeheffer, and W. Chen. RaceTrack: efficient detection of data race conditions via adaptive tracking. In Proc. SOSP, 221-234, 2005.
  57. T. Yu, W. Srisa-an, and G. Rothermel. SimRT: an automated framework to support regression testing for data races. In Proc. ICSE, 48-59, 2014.
  58. K. Zhai, B.N. Xu, W.K. Chan, and T.H. Tse. CARISMA: a context-sensitive approach to race-condition sample-instance selection for multithreaded applications. In Proc. ISSTA, 221-231, 2012.
  59. W. Zhang, M. de Kruijf, A. Li, S. Lu, and K. Sankaralingam. ConAir: featherweight concurrency bug recovery via single-threaded idempotent execution. In Proc. ASPLOS, 113-126, 2013.

        Published in

        ESEC/FSE 2017: Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering
        August 2017
        1073 pages
        ISBN:9781450351058
        DOI:10.1145/3106237

        Copyright © 2017 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 21 August 2017


        Qualifiers

        • research-article

        Acceptance Rates

        Overall Acceptance Rate: 112 of 543 submissions, 21%
