Abstract
We have developed a communication optimizer that focuses on stencil communication patterns. The work was carried out in the context of the UNH C* compiler, which targets distributed-memory MIMD computers. Our work has two distinguishing features:
- The compiler/optimizer is designed to be highly portable. We achieve this goal by providing efficient support for the optimizations in the run-time library.
- In addition to aggregating messages that share the same source and destination, we employ a specialized store-and-forward protocol that reduces the total number of messages initiated; a sketch of how the two techniques combine appears after this list.
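The abstract only names the two techniques, so the following is a rough illustration rather than the UNH C* run-time interface: a hand-coded ghost-cell exchange for a 9-point stencil on a 2D block decomposition, written against MPI as a stand-in message-passing layer. The function and parameter names (`exchange_ghosts`, `grid`, `N`, the neighbor ranks) are invented for the example. Whole boundary rows and columns travel as single aggregated messages, and corner values reach the diagonal neighbors by riding through the north/south neighbors, so each process initiates four messages per sweep instead of eight.

```c
/*
 * Illustrative sketch only (not the UNH C* run-time library):
 *   (a) aggregation  - each ghost row/column travels as ONE message;
 *   (b) store-and-forward - corner values piggy-back on the north/south
 *       messages and are then forwarded east/west, so no diagonal
 *       messages are ever initiated.
 */
#include <mpi.h>

#define N 256   /* interior points per dimension of the local block */

/* grid has a one-cell ghost ring: indices 0 and N+1 in each dimension.
   Neighbor ranks may be MPI_PROC_NULL at the mesh edge; MPI_Sendrecv
   then simply skips that transfer. */
void exchange_ghosts(double grid[N + 2][N + 2],
                     int north, int south, int east, int west,
                     MPI_Comm comm)
{
    double send_col[N + 2], recv_col[N + 2];
    MPI_Status st;
    int i;

    /* Phase 1: north/south.  Each boundary row is one aggregated message
       of N+2 doubles; its interior end points are exactly the corner
       values the diagonal neighbors will eventually need. */
    MPI_Sendrecv(&grid[1][0],     N + 2, MPI_DOUBLE, north, 0,
                 &grid[N + 1][0], N + 2, MPI_DOUBLE, south, 0, comm, &st);
    MPI_Sendrecv(&grid[N][0],     N + 2, MPI_DOUBLE, south, 1,
                 &grid[0][0],     N + 2, MPI_DOUBLE, north, 1, comm, &st);

    /* Phase 2: east/west.  The columns sent now include ghost rows 0 and
       N+1, which hold the data just received from north and south, so the
       corner values reach the diagonal neighbors by store-and-forward. */
    for (i = 0; i < N + 2; i++) send_col[i] = grid[i][1];
    MPI_Sendrecv(send_col, N + 2, MPI_DOUBLE, west, 2,
                 recv_col, N + 2, MPI_DOUBLE, east, 2, comm, &st);
    for (i = 0; i < N + 2; i++) grid[i][N + 1] = recv_col[i];

    for (i = 0; i < N + 2; i++) send_col[i] = grid[i][N];
    MPI_Sendrecv(send_col, N + 2, MPI_DOUBLE, east, 3,
                 recv_col, N + 2, MPI_DOUBLE, west, 3, comm, &st);
    for (i = 0; i < N + 2; i++) grid[i][0] = recv_col[i];
}
```

An element-by-element exchange of the same boundaries would initiate on the order of N messages per neighbor plus four extra diagonal messages per process; the aggregated, forwarded version above initiates exactly four per sweep.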
© 1996 Springer Science+Business Media New York
Chappelow, S.W., Hatcher, P.J., Mason, J.R. (1996). Optimizing Data-Parallel Stencil Computations in a Portable Framework. In: Szymanski, B.K., Sinharoy, B. (eds) Languages, Compilers and Run-Time Systems for Scalable Computers. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-2315-4_4
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4613-5979-1
Online ISBN: 978-1-4615-2315-4