Language and run-time support for network parallel computing

  • Conference paper

Languages and Compilers for Parallel Computing (LCPC 1995)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1033)

Abstract

Network parallel computing is the use of diverse computing resources interconnected by general-purpose networks to run parallel applications. This paper describes NetFx, an extension of the Fx compiler system that uses the Fx model of task parallelism to distribute and manage computations across the sequential and parallel machines of a network. A central problem in network parallel computing is that the compiler is presented with a heterogeneous and dynamic target. Our approach is based on a novel run-time system that presents a simple communication interface to the compiler, yet uses compiler knowledge to customize communication between tasks executing over the network. The run-time system is designed to support complete applications developed with different compilers and parallel program generators. It presents a standard communication interface for point-to-point transfer of distributed data sets between tasks. This allows the compiler to be portable, and enables communication generation without knowledge of exactly how the tasks will be mapped at run time and what low-level communication primitives will be used. The compiler also generates a set of custom routines, called address computation functions, to translate between different data distributions. The run-time system performs the actual communication using a mix of generic and custom address computation functions, depending on run-time parameters such as the type and number of nodes assigned to the communicating tasks and the data distributions of the variables being communicated. This mechanism enables the run-time system to exploit compile-time optimizations, and enables the compiler to manage foreign tasks that use non-standard data distributions. We outline several important applications of network parallel computing and describe the NetFx programming model and run-time system.
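To make the mechanism concrete, the following is a minimal, hypothetical sketch in C of what an address computation function might look like for a one-dimensional distributed array: it maps a global element index to the owning node and the local offset on that node. The names, signatures, and the block/cyclic distributions shown are illustrative assumptions, not the actual NetFx run-time interface.

```c
/* Hypothetical sketch of an "address computation function": given a
 * global index into a 1-D distributed array, return which node owns
 * the element and at what local offset it is stored. Names and
 * distributions are illustrative, not the NetFx interface. */
#include <stdio.h>

typedef struct {
    int node;   /* owning node within the task's node set */
    int offset; /* offset into that node's local array     */
} addr_t;

/* Specialized routine a compiler could emit when it knows at compile
 * time that the array is BLOCK-distributed over n_nodes nodes. */
static addr_t block_addr(int global_index, int array_len, int n_nodes)
{
    int block = (array_len + n_nodes - 1) / n_nodes; /* ceiling block size */
    addr_t a = { global_index / block, global_index % block };
    return a;
}

/* Generic routine a run-time system could fall back on when the
 * distribution (here CYCLIC) is known only at run time. */
static addr_t cyclic_addr(int global_index, int n_nodes)
{
    addr_t a = { global_index % n_nodes, global_index / n_nodes };
    return a;
}

int main(void)
{
    /* Example: element 10 of a 16-element array spread over 4 nodes. */
    addr_t b = block_addr(10, 16, 4);
    addr_t c = cyclic_addr(10, 4);
    printf("block:  node %d, offset %d\n", b.node, b.offset);
    printf("cyclic: node %d, offset %d\n", c.node, c.offset);
    return 0;
}
```

In the scheme described above, the run-time system would choose between such compiler-emitted specialized routines and generic fallbacks based on run-time parameters, such as how many nodes each task actually received and which distributions the communicated variables use, which is how compile-time knowledge can be exploited without fixing the task mapping in advance.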

This research was sponsored in part by the Advanced Research Projects Agency/CSTO monitored by SPAWAR under contract N00039-93-C-0152, in part by the National Science Foundation under Grant ASC-9318163, and in part by a grant from the Intel Corporation.

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Dinda, P.A., O'Hallaron, D.R., Subhlok, J., Webb, J.A., Yang, B. (1996). Language and run-time support for network parallel computing. In: Huang, CH., Sadayappan, P., Banerjee, U., Gelernter, D., Nicolau, A., Padua, D. (eds) Languages and Compilers for Parallel Computing. LCPC 1995. Lecture Notes in Computer Science, vol 1033. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0014222

  • DOI: https://doi.org/10.1007/BFb0014222

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60765-6

  • Online ISBN: 978-3-540-49446-1
