
Scalable, parallel computers: Alternatives, issues, and challenges

Published in: International Journal of Parallel Programming

Abstract

The 1990s will be the era of scalable computers. By giving up uniform memory access, computers can be built that scale over a range of several thousand. These provide high peak announced performance (PAP) by using powerful, distributed CMOS microprocessor-primary memory pairs interconnected by a high-performance switch (network). The parameters that determine these structures and their utility include: whether hardware (a multiprocessor) or software (a multicomputer) is used to maintain a distributed shared memory (DSM) environment; the power of the computing nodes (which improve at 60% per year); the size and scalability of the switch; distributability (the ability to connect to geographically dispersed computers, including workstations); and all forms of software to exploit the inherent parallelism. To a great extent, viability is determined by a computer's generality: the ability to efficiently handle a range of work that requires varying processing (from serial to fully parallel), memory, and I/O resources. A taxonomy and an evolutionary time line, based on scalability and parallelism, outline the next decade of computer evolution, including distributed workstations. Workstations can be the best scalable computers.
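The abstract's 60%-per-year node-improvement figure compounds dramatically over the decade it discusses; a minimal sketch of that arithmetic (only the 60% rate comes from the abstract, the helper name is illustrative):

```python
# Compound growth of single-node performance at the 60%-per-year
# improvement rate cited in the abstract (illustrative only).
GROWTH_RATE = 0.60  # 60% improvement per year, per the abstract

def node_speedup(years: int, rate: float = GROWTH_RATE) -> float:
    """Relative node performance after `years` of compounding."""
    return (1.0 + rate) ** years

# Over a decade, node performance alone grows roughly 110-fold:
print(f"10-year node speedup: {node_speedup(10):.0f}x")  # ~110x
```

This is why the node-performance parameter dominates the design space: a fixed switch topology must remain useful while the attached nodes improve by two orders of magnitude.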




Cite this article

Bell, G. Scalable, parallel computers: Alternatives, issues, and challenges. Int J Parallel Prog 22, 3–46 (1994). https://doi.org/10.1007/BF02577791
