
What Do We Need to Know About Parallel Algorithms and Their Efficient Implementation?

Chapter in: Topics in Parallel and Distributed Computing

Abstract

The computing world is changing: all devices, from mobile phones and personal computers to high-performance supercomputers, are becoming parallel. At the same time, making efficient use of all the opportunities offered by modern computing systems represents a global challenge. Exploiting the full potential of parallel computing systems and distributed computing resources requires new knowledge, skills, and abilities, chief among them an understanding of the key properties of parallel algorithms. What are these properties? What should be discovered and expressed explicitly in existing algorithms when a new parallel architecture appears? How can the efficient implementation of an algorithm on a particular parallel computing platform be ensured? This chapter addresses these and many other questions. The idea we use in our educational practice is to split the description of an algorithm into two parts. The first part describes the algorithm and its properties. The second part is dedicated to particular aspects of its implementation on various computing platforms. This division is made intentionally, to highlight the machine-independent properties of algorithms and to describe them separately from the issues related to the subsequent stages of programming and executing the resulting programs.
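To make this two-part description concrete, the sketch below (our illustration, not code from the chapter) applies it to a dot product. The machine-independent part is the algorithm's structure: all n products a[i]*b[i] are mutually independent, and the final sum is a reduction over an associative operation, so partial sums may be combined in any order. The platform-specific part is how that structure is mapped onto particular hardware, here shown for a shared-memory machine; the function name dot and the choice of OpenMP are assumptions made for the example.

    /* Minimal illustrative sketch, not from the chapter.
       Machine-independent part: dot(a,b) = sum_i a[i]*b[i];
       the products are independent and "+" is associative, so
       partial sums can be combined in any order (e.g., in a tree).
       Platform-specific part: the OpenMP pragma maps that reduction
       onto the cores of one shared-memory machine.
       Build (assumption): cc -fopenmp dot.c -o dot               */
    #include <stdio.h>
    #include <stdlib.h>

    static double dot(const double *a, const double *b, size_t n)
    {
        double s = 0.0;
        /* Each thread accumulates a private partial sum; the runtime
           combines them, which is valid because addition is associative
           (up to floating-point rounding).                            */
        #pragma omp parallel for reduction(+:s)
        for (size_t i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }

    int main(void)
    {
        size_t n = 1u << 20;                  /* 1,048,576 elements */
        double *a = malloc(n * sizeof *a);
        double *b = malloc(n * sizeof *b);
        if (!a || !b) return 1;
        for (size_t i = 0; i < n; i++) { a[i] = 1.0; b[i] = 2.0; }
        printf("dot = %.1f\n", dot(a, b, n)); /* expected: 2097152.0 */
        free(a); free(b);
        return 0;
    }

The same machine-independent structure could equally be mapped onto a distributed-memory machine with MPI; that choice of mapping is exactly the kind of implementation detail the second part of a description is meant to isolate from the algorithm's properties.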


Notes

  1. The results were obtained at Lomonosov Moscow State University with the financial support of the Russian Science Foundation (agreement N 14-11-00190). The research was carried out using the equipment of the shared research facilities of HPC computing resources at Lomonosov Moscow State University.


Author information

Correspondence to Vladimir Voevodin.



Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter


Cite this chapter

Voevodin, V., Antonov, A., Voevodin, V. (2018). What Do We Need to Know About Parallel Algorithms and Their Efficient Implementation?. In: Prasad, S., Gupta, A., Rosenberg, A., Sussman, A., Weems, C. (eds) Topics in Parallel and Distributed Computing. Springer, Cham. https://doi.org/10.1007/978-3-319-93109-8_2


  • DOI: https://doi.org/10.1007/978-3-319-93109-8_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-93108-1

  • Online ISBN: 978-3-319-93109-8

  • eBook Packages: Computer Science (R0)
