PTAH Introduction to a new parallel architecture for highly numeric processing

  • Conference paper
PARLE '92 Parallel Architectures and Languages Europe (PARLE 1992)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 605)

Abstract

This paper proposes a new architectural design for high-performance parallel computers: the one-cycle machine. In such a computer, memory access, network access, instruction sequencing, and data computation all take the same duration: one clock cycle. We first identify the efficiency of the communication network as the main critical resource. We show that matching the network performance to the processing-element power matters more for overall processing effectiveness than raw CPU power itself. Two guidelines derived from this analysis lead to the design of PTAH. Two simple examples illustrate the benefits of PTAH for the execution of numeric applications. Finally, hardware features are proposed for a PTAH implementation capable of reaching the TeraFLOPS range.
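
The central claim of the abstract, that overall throughput is governed by how well the network keeps the processing elements fed rather than by peak CPU power alone, can be illustrated with a toy balance model. This is a minimal sketch using hypothetical figures, not a model taken from the paper:

    # Toy balance model: sustained performance is bounded by the slower of the
    # compute limit and the communication limit. All figures are hypothetical.

    def sustained_gflops(peak_gflops, net_gwords_per_s, flops_per_word):
        """Sustained rate = min(compute limit, communication limit)."""
        return min(peak_gflops, net_gwords_per_s * flops_per_word)

    # Kernel performing 4 floating-point operations per word exchanged over
    # the network (an assumed arithmetic intensity).
    FLOPS_PER_WORD = 4

    # Fast processing elements behind an under-provisioned network:
    unbalanced = sustained_gflops(10.0, 0.5, FLOPS_PER_WORD)   # -> 2.0 GFLOPS

    # Slower processing elements whose network delivers one word per cycle,
    # in the spirit of the one-cycle machine:
    balanced = sustained_gflops(4.0, 1.0, FLOPS_PER_WORD)      # -> 4.0 GFLOPS

    print(f"unbalanced: {unbalanced} GFLOPS, balanced: {balanced} GFLOPS")

Under these assumed numbers, the machine with the lower peak rate sustains twice the throughput of the nominally faster but communication-starved one, which is the trade-off the paper's two design guidelines address.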

Editor information

Daniel Etiemble, Jean-Claude Syre

Copyright information

© 1992 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Cappello, F., Béchennec, JL., Giavitto, JL. (1992). PTAH Introduction to a new parallel architecture for highly numeric processing. In: Etiemble, D., Syre, JC. (eds) PARLE '92 Parallel Architectures and Languages Europe. PARLE 1992. Lecture Notes in Computer Science, vol 605. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-55599-4_82

  • DOI: https://doi.org/10.1007/3-540-55599-4_82

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-55599-5

  • Online ISBN: 978-3-540-47250-6

  • eBook Packages: Springer Book Archive
