
Design and implementation of a general purpose parallel programming system

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1067)

Abstract

There are many non-scientific, general purpose applications that could benefit from a modest use of parallelism, but for lack of good programming support tools such applications are unable to exploit parallel computation on workstation networks. In this paper, we present a model for general purpose parallel computation called the Composite Model, together with its implementation as a set of portable language extensions. We then present an example of a language extended with composite constructs. Finally, we describe the compiler technology that allows composite programs to run effectively on workstation clusters.
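The full text is behind a paywall, so the paper's actual composite constructs are not visible here. As a rough illustration only of the general idea the abstract describes, a data-parallel "apply to every element" operation of the kind such language extensions typically provide, here is a minimal sketch in Python, with a thread pool standing in for a workstation-cluster runtime. The name `par_apply` and all details below are illustrative assumptions, not the paper's Composite Model.

```python
# Illustrative sketch only: a generic data-parallel apply-to-all, in the
# spirit of data-parallel language extensions. This is NOT the paper's
# Composite Model; a thread pool stands in for a cluster runtime.
from concurrent.futures import ThreadPoolExecutor


def par_apply(fn, xs, workers=4):
    """Apply fn to every element of xs in parallel, preserving input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Executor.map distributes elements across workers and yields
        # results in the same order as the input sequence.
        return list(pool.map(fn, xs))


if __name__ == "__main__":
    # Elementwise squaring across the pool.
    print(par_apply(lambda x: x * x, range(8)))
```

A real system of the kind the abstract describes would, of course, compile such a construct down to distributed execution across workstations rather than local threads; the sketch only conveys the programming-model flavor.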




Editor information

Heather Liddell, Adrian Colbrook, Bob Hertzberger, Peter Sloot

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Chu-Carroll, M., Pollock, L.L. (1996). Design and implementation of a general purpose parallel programming system. In: Liddell, H., Colbrook, A., Hertzberger, B., Sloot, P. (eds) High-Performance Computing and Networking. HPCN-Europe 1996. Lecture Notes in Computer Science, vol 1067. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61142-8_589

  • DOI: https://doi.org/10.1007/3-540-61142-8_589

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61142-4

  • Online ISBN: 978-3-540-49955-8

  • eBook Packages: Springer Book Archive
