
Certifying Sequential Consistency of Machine Learning Accelerators

  • Conference paper
Formal Methods and Software Engineering (ICFEM 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14308)


Abstract

Machine learning accelerators (MLAs) are increasingly important in many applications such as image and video processing, speech recognition, and natural language processing. To achieve the needed performance and power efficiency, MLAs are highly concurrent. The correctness of an MLA hinges on sequential consistency: the concurrent execution of a program by the MLA must be equivalent to a sequential execution of that program. In this paper, we certify the sequential consistency of modular MLAs using theorem proving. We first formalize MLAs and define their sequential consistency. We then introduce our certification methodology, which is based on inductive theorem proving. Finally, we demonstrate the feasibility of our approach by analyzing the NVIDIA Deep Learning Accelerator and the Versatile Tensor Accelerator.
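The paper's formal development is not reproduced on this page, but the property being certified can be illustrated with a toy model. In the sketch below (all names are hypothetical, not the paper's formalization), two concurrent modules each write to disjoint parts of a shared memory; sequential consistency then amounts to checking that every interleaving of their steps produces the same final memory as one fixed sequential execution.

```python
# Toy illustration of sequential consistency (hypothetical model,
# not the paper's formalization): two concurrent modules update
# disjoint addresses of a shared memory, and we check that every
# interleaving of their steps agrees with a sequential execution.

def interleavings(a, b):
    """Yield all merges of sequences a and b that preserve
    the internal order of each sequence."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def run(steps):
    """Execute a list of (address, value) writes in order."""
    mem = {}
    for addr, val in steps:
        mem[addr] = val
    return mem

# Each module writes only its own addresses, so no interleaving
# can change the final memory contents.
loader  = [("buf0", 1), ("buf1", 2)]
compute = [("out0", 3), ("out1", 4)]

sequential = run(loader + compute)
assert all(run(s) == sequential
           for s in interleavings(loader, compute))
```

In a real accelerator the modules interact through queues and shared buffers rather than disjoint memory, which is why the paper resorts to inductive theorem proving instead of enumerating interleavings.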



Acknowledgment

This research is partially supported by a gift from Intel Corporation.

Author information


Correspondence to Huan Wu, Fei Xie, or Zhenkun Yang.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Wu, H., Xie, F., Yang, Z. (2023). Certifying Sequential Consistency of Machine Learning Accelerators. In: Li, Y., Tahar, S. (eds) Formal Methods and Software Engineering. ICFEM 2023. Lecture Notes in Computer Science, vol 14308. Springer, Singapore. https://doi.org/10.1007/978-981-99-7584-6_10


  • DOI: https://doi.org/10.1007/978-981-99-7584-6_10

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-7583-9

  • Online ISBN: 978-981-99-7584-6

  • eBook Packages: Computer Science, Computer Science (R0)
