Models of parallel computation: a survey and classification

  • Review Article
  • Published in Frontiers of Computer Science in China

Abstract

This paper reviews the state of the art in parallel computational model research and surveys the models developed over the past decades. According to the features of their target architectures, especially memory organization, we classify these models into three generations, and we discuss the models and their characteristics within this three-generation framework. We believe that, with the ever-increasing speed gap between CPUs and memory systems, incorporating a non-uniform memory hierarchy into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms is becoming more complicated, and describing this hierarchy in future computational models is increasingly important. A semi-automatic toolkit that extracts model parameters and their values on real machines could reduce the complexity of model analysis, allowing more elaborate models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two essential features to consider in future model design and research.
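
The semi-automatic parameter-extraction toolkit suggested above can be made concrete with a small micro-benchmark. The C sketch below is our illustration, not the paper's toolkit: it estimates average memory-access latency by pointer chasing over increasing working-set sizes, and the latency plateaus it reports roughly expose the cache and memory levels whose capacities and access costs a hierarchical-memory model would take as parameters. It assumes a POSIX clock_gettime(); the 16 KiB to 64 MiB sweep and the iteration count are arbitrary choices.

```c
/*
 * Minimal sketch (illustration only): pointer-chasing probe of average
 * memory-access latency at increasing working-set sizes.  The latency
 * steps approximate the machine's memory-hierarchy levels.
 * Assumes a POSIX system providing clock_gettime(CLOCK_MONOTONIC).
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Average nanoseconds per dependent load over a working set of n_elems
   pointers, measured with `iters` chained accesses. */
static double chase(size_t n_elems, size_t iters)
{
    size_t *next = malloc(n_elems * sizeof *next);
    if (!next) { perror("malloc"); exit(1); }

    /* Sattolo's algorithm builds one random cycle over all elements, so
       the chase touches the whole working set and defeats prefetching. */
    for (size_t i = 0; i < n_elems; i++) next[i] = i;
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;            /* 0 <= j < i */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    struct timespec t0, t1;
    volatile size_t p = 0;                        /* keeps loads dependent */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t k = 0; k < iters; k++) p = next[p];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(next);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / (double)iters;
}

int main(void)
{
    /* Working sets from 16 KiB to 64 MiB, doubling each step. */
    for (size_t bytes = 16u << 10; bytes <= (64u << 20); bytes <<= 1) {
        size_t n = bytes / sizeof(size_t);
        printf("%8zu KiB : %6.1f ns per load\n",
               (size_t)(bytes >> 10), chase(n, 5u * 1000 * 1000));
    }
    return 0;
}
```

On a typical machine the reported latency stays flat while the working set fits in a given cache level and jumps at each capacity boundary; measurements of this kind are what such a toolkit would feed into a non-uniform memory-hierarchy model.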

Author information

Correspondence to Chen Guoliang.

Cite this article

Zhang, Y., Chen, G., Sun, G. et al. Models of parallel computation: a survey and classification. Front. Comput. Sc. China 1, 156–165 (2007). https://doi.org/10.1007/s11704-007-0016-1
