Scalability in Parallel Processing

Abstract

The objective of this chapter is to discuss the notion of scalability. We start by explaining the notion with an emphasis on modern (and future) large-scale parallel platforms. We then review the classical metrics used for estimating the scalability of a parallel platform, namely speed-up, efficiency, and asymptotic analysis. We continue with a presentation of two fundamental laws of scalability: Amdahl’s law and Gustafson’s law. Our presentation considers the original arguments of the authors and reexamines their applicability to today’s machines and computational problems. The chapter then discusses more advanced topics covering the evolution of computing fields (in terms of problems), modern resource-sharing techniques, and the more specific issue of reducing energy consumption. The chapter ends with a presentation of a statistical approach to the design of scalable algorithms, which shows how scalable algorithms can be designed through a “cooperation” of several parallel algorithms solving the same problem. The construction of such cooperations is particularly interesting when solving hard combinatorial problems; we illustrate this last point on the classical Boolean satisfiability problem (SAT).
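For reference, the metrics and laws named above admit a standard formulation (the notation here is assumed, not taken from the chapter text). Writing T_1 for the running time of the sequential algorithm and T_p for the running time on p processors, the speed-up and efficiency are

\[ S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p}. \]

If f denotes the inherently sequential fraction of the work, Amdahl’s law bounds the speed-up at fixed problem size, whereas Gustafson’s law gives the scaled speed-up when the problem size grows with p (there, f is the sequential fraction observed on the parallel machine):

\[ S_{\mathrm{Amdahl}}(p) \le \frac{1}{f + \frac{1-f}{p}}, \qquad S_{\mathrm{Gustafson}}(p) = f + (1-f)\,p. \]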


Notes

  1. Details are available at http://top500.org

  2. http://www.exascale-projects.eu/

  3. Irregular algorithms are characterized by non-uniform memory access patterns: when running such algorithms, the data we want to access are frequently not in the caches. Well-known irregular algorithms include Cholesky factorization, finite-difference algorithms, agglomerative clustering, Prim’s algorithm, Kruskal’s algorithm, and belief propagation (a small code sketch follows these notes).

  4. The idea of comparing algorithms by their average running time on a set of representative computational instances is used in international algorithm competitions. One of the most famous is the SAT competition, where one goal is to solve as many instances as possible within a given time limit; SAT refers to the Boolean satisfiability problem (a second sketch after these notes illustrates this scoring).

  5. This was observed on multicore machines, where successive generations of machines integrate more and more cores.

  6. http://www.cril.univ-artois.fr/~hoessen/penelope.html
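As a complement to note 3, here is a minimal sketch of what a non-uniform memory access pattern looks like in code; the array size and the permutation are illustrative assumptions, not taken from the chapter.

    import random

    n = 1_000_000
    data = list(range(n))
    order = list(range(n))
    random.shuffle(order)  # data-dependent, scattered indices

    def regular_sum(xs):
        # Regular pattern: consecutive accesses, well served by caches
        # and hardware prefetchers.
        return sum(xs[i] for i in range(len(xs)))

    def irregular_sum(xs, idx):
        # Irregular pattern: successive loads follow a permutation and
        # touch unrelated locations, so many accesses miss in the caches.
        return sum(xs[i] for i in idx)

    assert regular_sum(data) == irregular_sum(data, order)

Both functions compute the same value; they differ only in the order of their memory accesses, which is precisely what separates regular from irregular algorithms.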
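Similarly, for note 4, a sketch of the competition-style comparison: each solver is scored by the number of instances it solves within a common time limit. The solver names, runtimes, and limit below are hypothetical.

    TIME_LIMIT = 5000.0  # seconds; an assumed value, not an official one

    # Hypothetical runtimes per instance; None marks an unsolved instance.
    runtimes = {
        "solver_A": [12.0, None, 430.0, 88.0],
        "solver_B": [9.5, 2100.0, None, None],
    }

    def solved_within(times, limit):
        # Count the instances solved within the time limit.
        return sum(1 for t in times if t is not None and t <= limit)

    ranking = sorted(runtimes, key=lambda s: -solved_within(runtimes[s], TIME_LIMIT))
    print(ranking)  # ['solver_A', 'solver_B']

A “cooperation” of several solvers can be scored with the same metric, which suggests how candidate combinations of algorithms can be compared against one another.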

References

  1. Gene M. Amdahl. Validity of the single processor approach to achieving large scale computing capabilities. In Proceedings of the April 18–20, 1967, Spring Joint Computer Conference, AFIPS ’67 (Spring), pages 483–485, New York, NY, USA, 1967. ACM.

  2. Krste Asanovic, Rastislav Bodik, James Demmel, Tony Keaveny, Kurt Keutzer, John Kubiatowicz, Nelson Morgan, David Patterson, Koushik Sen, John Wawrzynek, David Wessel, and Katherine Yelick. A view of the parallel computing landscape. Commun. ACM, 52(10):56–67, October 2009.

  3. Marin Bougeret, Pierre-François Dutot, Alfredo Goldman, Yanik Ngoko, and Denis Trystram. Approximating the discrete resource sharing scheduling problem. Int. J. Found. Comput. Sci., 22(3):639–656, 2011.

  4. Michel Cosnard and Denis Trystram. Algorithmes et Architectures parallèles. InterEditions, France, 1993 (English version published by International Thomson Publishing, 1995).

  5. Pierre-François Dutot, Grégory Mounié, and Denis Trystram. Scheduling Parallel Tasks: Approximation Algorithms. In Joseph T. Leung, editor, Handbook of Scheduling: Algorithms, Models, and Performance Analysis, chapter 26, pages 26-1 to 26-24. CRC Press, 2004.

  6. Richard Brown et al. Report to Congress on Server and Data Center Energy Efficiency: Public Law 109–431. Technical report, Lawrence Berkeley National Laboratory, 2008.

  7. Steven Fortune and James Wyllie. Parallelism in random access machines. In Proceedings of the Tenth Annual ACM Symposium on Theory of Computing, STOC ’78, pages 114–118, New York, NY, USA, 1978. ACM.

  8. Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., New York, NY, USA, 1979.

  9. Alfredo Goldman, Yanik Ngoko, and Denis Trystram. Malleable resource sharing algorithms for cooperative resolution of problems. In IEEE Congress on Evolutionary Computation, pages 1–8. IEEE, 2012.

  10. Ananth Y. Grama, Anshul Gupta, and Vipin Kumar. Isoefficiency: Measuring the scalability of parallel algorithms and architectures. IEEE Parallel Distrib. Technol., 1(3):12–21, August 1993.

  11. Raymond Greenlaw, H. James Hoover, and Walter L. Ruzzo. Limits to Parallel Computation: P-completeness Theory. Oxford University Press, Inc., New York, NY, USA, 1995.

  12. John L. Gustafson. Reevaluating Amdahl’s law. Commun. ACM, 31(5):532–533, May 1988.

  13. Benjamin Hindman, Andy Konwinski, Matei Zaharia, Ali Ghodsi, Anthony D. Joseph, Randy Katz, Scott Shenker, and Ion Stoica. Mesos: A platform for fine-grained resource sharing in the data center. In Proceedings of the 8th USENIX Conference on Networked Systems Design and Implementation, NSDI’11, pages 295–308, Berkeley, CA, USA, 2011. USENIX Association.

  14. Bernardo A. Huberman, Rajan M. Lukose, and Tad Hogg. An economics approach to hard computational problems. Science, 275:51–54, 1997.

  15. Jonathan Koomey, Stephen Berard, Marla Sanchez, and Henry Wong. Implications of historical trends in the electrical efficiency of computing. IEEE Ann. Hist. Comput., 33(3):46–54, July 2011.

  16. Bich C. Le. An out-of-order execution technique for runtime binary translators. SIGPLAN Not., 33(11):151–158, October 1998.

  17. Tao Li and Lizy Kurian John. Run-time modeling and estimation of operating system power consumption. SIGMETRICS Perform. Eval. Rev., 31(1):160–171, June 2003.

  18. Susanta Nanda and Tzi-cker Chiueh. A survey of virtualization technologies. Technical report, SUNY at Stony Brook, 2005.

  19. Nicholas Pippenger. On simultaneous resource bounds. In Proceedings of the 20th Annual Symposium on Foundations of Computer Science, SFCS ’79, pages 307–311, Washington, DC, USA, 1979. IEEE Computer Society.

  20. S. K. Prasad, A. Chtchelkanova, F. Dehne, M. Gouda, A. Gupta, J. Jaja, K. Kant, A. La Salle, R. LeBlanc, A. Lumsdaine, D. Padua, M. Parashar, V. Prasanna, Y. Robert, A. Rosenberg, S. Sahni, B. Shirazi, A. Sussman, C. Weems, and J. Wu. NSF/IEEE-TCPP Curriculum Initiative on Parallel and Distributed Computing - Core Topics for Undergraduates, Version I. Online: http://www.cs.gsu.edu/~tcpp/curriculum/, 55 pages, USA, 2012.

  21. John R. Rice. The algorithm selection problem. Advances in Computers, 15:65–118, 1976.

  22. Walter Tichy. Auto-tuning parallel software: An interview with Thomas Fahringer: The multicore transformation (Ubiquity symposium). Ubiquity, 2014(June):5:1–5:9, June 2014.

  23. Moshe Y. Vardi. Moore’s law and the sand-heap paradox. Commun. ACM, 57(5):5–5, May 2014.

  24. R. Clint Whaley, Antoine Petitet, and Jack Dongarra. Automated empirical optimization of software and the ATLAS project. Parallel Computing, 27(1–2):3–35, 2001.

  25. Dong Hyuk Woo and Hsien-Hsin S. Lee. Extending Amdahl’s law for energy-efficient computing in the many-core era. Computer, 41(12):24–31, December 2008.

  26. Wm. A. Wulf and Sally A. McKee. Hitting the memory wall: Implications of the obvious. SIGARCH Comput. Archit. News, 23(1):20–24, March 1995.

Author information

Correspondence to Denis Trystram.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter

Cite this chapter

Ngoko, Y., Trystram, D. (2018). Scalability in Parallel Processing. In: Prasad, S., Gupta, A., Rosenberg, A., Sussman, A., Weems, C. (eds) Topics in Parallel and Distributed Computing. Springer, Cham. https://doi.org/10.1007/978-3-319-93109-8_4

  • DOI: https://doi.org/10.1007/978-3-319-93109-8_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-93108-1

  • Online ISBN: 978-3-319-93109-8

  • eBook Packages: Computer Science, Computer Science (R0)
