
DataRaceOnAccelerator – A Micro-benchmark Suite for Evaluating Correctness Tools Targeting Accelerators

  • Conference paper
  • First Online:
Euro-Par 2019: Parallel Processing Workshops (Euro-Par 2019)

Abstract

The advent of hardware accelerators over the past decade has significantly increased the complexity of modern parallel applications. For correctness, applications must properly synchronize the host with its accelerators to avoid defects. Because concurrency defects on accelerators are hard to detect and debug, researchers have proposed several correctness tools. However, existing correctness tools targeting accelerators have not been evaluated comprehensively and objectively, since few micro-benchmarks are available that can test the functionality of such a tool.

In this paper, we propose DataRaceOnAccelerator (DRACC), a micro-benchmark suite designed to evaluate the capabilities of correctness tools for accelerators. DRACC provides micro-benchmarks for common error patterns in CUDA, OpenMP, and OpenACC programs. These micro-benchmarks can be used to measure the precision and recall of a correctness tool. We categorize all micro-benchmarks into groups based on their error patterns and analyze the runtime information needed to capture each pattern. To demonstrate the effectiveness of DRACC, we used it to evaluate four existing correctness tools: ThreadSanitizer, Archer, GPUVerify, and CUDA-MEMCHECK. The results show that DRACC can reveal the strengths and weaknesses of a correctness tool.


Notes

  1. https://www.top500.org/.
  2. https://github.com/RWTH-HPC/DRACC.
  3. https://github.com/PRUNERS/openmp/tree/archer_70 (303a691).
  4. http://multicore.doc.ic.ac.uk/tools/GPUVerify/download.php.
  5. https://docs.nvidia.com/cuda/cuda-memcheck/index.html.



Author information

Correspondence to Adrian Schmitz.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Schmitz, A., Protze, J., Yu, L., Schwitanski, S., Müller, M.S. (2020). DataRaceOnAccelerator – A Micro-benchmark Suite for Evaluating Correctness Tools Targeting Accelerators. In: Schwardmann, U., et al. Euro-Par 2019: Parallel Processing Workshops. Euro-Par 2019. Lecture Notes in Computer Science, vol. 11997. Springer, Cham. https://doi.org/10.1007/978-3-030-48340-1_19


  • DOI: https://doi.org/10.1007/978-3-030-48340-1_19

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-48339-5

  • Online ISBN: 978-3-030-48340-1

  • eBook Packages: Computer Science, Computer Science (R0)
