
On the Use of BLAS Libraries in Modern Scientific Codes at Scale

  • Conference paper
  • Part of the proceedings: Driving Scientific and Engineering Discoveries Through the Convergence of HPC, Big Data and AI (SMC 2020)
  • Part of the book series: Communications in Computer and Information Science (CCIS, volume 1315)

Abstract

As we approach the Exascale era, computer architectures are gaining ever more capable vector and matrix acceleration units—NVIDIA’s Ampere Tensor Cores, Intel’s AMX, and Arm’s SVE vector instruction set developments are just three recent examples [1, 2, 10]. To exploit these, it is expected that optimised math libraries, such as those for dense and sparse linear algebra, will play an increasing role in achieving optimal performance. It is therefore useful to understand which of these functions dominate an application’s runtime, and in particular how this changes with increasing scale. This work aims to provide a contemporary dataset on how much dense linear algebra (BLAS) is used in HPC codes at scale. We have analysed several science codes widely used on the UK HPC service, ARCHER (https://www.archer.ac.uk), including CASTEP, CP2K, QuantumESPRESSO, and Nektar++. To capture demands from the AI community, we have additionally traced the training stage of the Convolutional Neural Network (CNN), AlexNet [7]. HPLinpack is also included as a reference, as it exhibits a well-understood BLAS usage pattern. Results from across all the codes show that, unlike HPLinpack, BLAS usage is never more than 25% of the total runtime, even when running at a modest scale (32 nodes of the Arm-based supercomputer, Isambard). This presents limited speedup opportunity when considering Amdahl’s law, and suggests that application developers may need to adjust their algorithms to spend more time in optimised BLAS libraries to capitalise on new architectures and accelerators.
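
To make the Amdahl’s-law point above concrete, the short Python sketch below (an illustration, not code from the paper) computes the speedup bound it implies: if at most a fraction p of the runtime is spent in BLAS and only that fraction is accelerated by a factor s, the whole-application speedup is 1 / ((1 - p) + p / s). The 25% figure is taken from the abstract; the function and variable names are assumptions made for this example.

    # Amdahl's-law bound applied to the BLAS fraction reported in the abstract.
    # Only the BLAS share of runtime (p) benefits from a faster library (factor s);
    # the remaining (1 - p) of the runtime is unchanged.

    def amdahl_speedup(p: float, s: float) -> float:
        """Overall speedup when a fraction p of runtime is accelerated by factor s."""
        return 1.0 / ((1.0 - p) + p / s)

    if __name__ == "__main__":
        p = 0.25  # upper bound on the BLAS share of runtime observed in the study
        for s in (2.0, 8.0, 32.0, float("inf")):
            print(f"BLAS sped up {s}x -> whole-application speedup {amdahl_speedup(p, s):.2f}x")
        # Even an infinitely fast BLAS caps the overall speedup at 1 / (1 - 0.25) = 1.33x,
        # which is the "limited speedup opportunity" the abstract refers to.

Under these assumptions, accelerating BLAS alone cannot deliver more than roughly a 1.33x whole-application speedup, which is why the authors suggest restructuring algorithms to spend more of their runtime inside optimised BLAS libraries.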


Notes

  1. https://www.nektar.info.
  2. https://www.quantum-espresso.org.
  3. http://www.castep.org.
  4. https://www.cp2k.org.
  5. https://lammps.sandia.gov.
  6. https://github.com/ARM-software/perf-libs-tools.
  7. https://github.com/UoB-HPC/perf-libs-tools.

References

  1. NVIDIA A100 Tensor Core GPU Architecture: Unprecedented Acceleration At Every Scale (2020). https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/nvidia-ampere-architecture-whitepaper.pdf

  2. The x86 Advanced Matrix Extension (AMX) Brings Matrix Operations; To Debut with Sapphire Rapids (2020). https://fuse.wikichip.org/news/3600/the-x86-advanced-matrix-extension-amx-brings-matrix-operations-to-debut-with-sapphire-rapids/

  3. Dongarra, J., Hammarling, S., Higham, N., Relton, S., Valero-Lara, P., Zounon, M.: The design and performance of batched BLAS on modern high-performance computing systems. Procedia Comput. Sci. 108, 495–504 (2017). https://doi.org/10.1016/j.procs.2017.05.138


  4. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016). http://www.deeplearningbook.org

  5. Gustafson, J.L.: Amdahl’s Law. In: Padua, D. (ed.) Encyclopedia of Parallel Computing, pp. 53–60. Springer, Boston, MA (2011). https://doi.org/10.1007/978-0-387-09766-4_77


  6. Hennessy, J.L., Patterson, D.A.: A new golden age for computer architecture. Commun. ACM 62(2), 48–60 (2019). https://doi.org/10.1145/3282307


  7. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems, NIPS’12, vol. 1, pp. 1097–1105. Curran Associates Inc., Red Hook, NY, USA (2012)


  8. Laguna, I., Marshall, R., Mohror, K., Ruefenacht, M., Skjellum, A., Sultana, N.: A large-scale study of MPI usage in open-source HPC applications. In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC’19. Association for Computing Machinery, New York, NY, USA (2019). https://doi.org/10.1145/3295500.3356176

  9. Markidis, S., Chien, S.W.D., Laure, E., Peng, I.B., Vetter, J.S.: NVIDIA tensor core programmability, performance and precision. In: 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) (2018). https://doi.org/10.1109/IPDPSW.2018.00091

  10. Stephens, N., et al.: The ARM scalable vector extension. IEEE Micro 37(2), 26–39 (2017). https://doi.org/10.1109/mm.2017.35


  11. Turner, A.: UK National HPC Benchmarks. Technical report, EPCC (2016)


  12. Turner, A., McIntosh-Smith, S.: A survey of application memory usage on a national supercomputer: an analysis of memory requirements on ARCHER. In: PMBS@SC (2017)


  13. Turner, A., Sloan-Murphy, D., Sivalingam, K., Richardson, H., Kunkel, J.M.: Analysis of parallel I/O use on the UK national supercomputing service, ARCHER using Cray LASSi and EPCC SAFE. arXiv:1906.03891 (2019)


Acknowledgment

This work used the Isambard UK National Tier-2 HPC Service, funded by the EPSRC (EP/P020224/1). We would like to thank Chris Goodyer (Arm) for his work on developing the Arm library tracing tool. We would also like to thank Andy Turner (EPCC), Filippo Spiga (NVIDIA), Phil Hasnip (CASTEP), and Spencer Sherman (Imperial College London) for their expertise in choosing benchmarks and running each application.

Author information

Correspondence to Harry Waugh.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Waugh, H., McIntosh-Smith, S. (2020). On the Use of BLAS Libraries in Modern Scientific Codes at Scale. In: Nichols, J., Verastegui, B., Maccabe, A.B., Hernandez, O., Parete-Koon, S., Ahearn, T. (eds) Driving Scientific and Engineering Discoveries Through the Convergence of HPC, Big Data and AI. SMC 2020. Communications in Computer and Information Science, vol 1315. Springer, Cham. https://doi.org/10.1007/978-3-030-63393-6_5


  • DOI: https://doi.org/10.1007/978-3-030-63393-6_5


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-63392-9

  • Online ISBN: 978-3-030-63393-6

  • eBook Packages: Computer Science, Computer Science (R0)
