Abstract
The message-passing paradigm is now widely accepted and used mainly for inter-process communication in distributed-memory parallel systems. One of its disadvantages, however, is the high cost of data exchange. In this paper, we therefore describe a message-passing optimization technique that exploits single-assignment and constant-information properties to reduce the number of communications. Similar to the more general partial-evaluation approach, the technique evaluates local and remote memory operations when only part of the input is known or available, and it further specializes the program with respect to the input data. It is applied to programs that use a distributed single-assignment memory system. Experimental results show a considerable speedup for programs running on systems with slow interconnection networks. We also show that single-assignment memory systems can tolerate network latency better and that the overhead introduced by their management can be hidden.
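The underlying idea can be illustrated with a minimal sketch (our own illustration under stated assumptions, not the paper's implementation): since each element of a single-assignment structure is written at most once, a remote read has to be communicated only the first time; every later read of the same element can be served from a local copy, eliminating those messages. In the C/MPI sketch below, the owner rank, message tags, array size, and the helper `sa_read` are all illustrative assumptions.

```c
/* Minimal sketch (illustrative, not the paper's implementation): a reader
 * caches each remotely fetched element of a single-assignment array, so
 * repeated reads of the same element cost no further communication. */
#include <mpi.h>
#include <stdio.h>

#define N        8        /* elements owned by rank 0 (illustrative)        */
#define TAG_REQ  0        /* request message: element index                 */
#define TAG_VAL  1        /* reply message:   element value                 */
#define STOP    -1        /* sentinel index that shuts the owner down       */

static double cache[N];   /* reader-side copies of fetched elements         */
static int    present[N]; /* nonzero once an element has been fetched       */

/* Reader side: fetch element i from `owner`, but only on the first read.   */
static double sa_read(int i, int owner, MPI_Comm comm, int *msgs)
{
    if (!present[i]) {
        MPI_Send(&i, 1, MPI_INT, owner, TAG_REQ, comm);
        MPI_Recv(&cache[i], 1, MPI_DOUBLE, owner, TAG_VAL, comm,
                 MPI_STATUS_IGNORE);
        present[i] = 1;   /* single assignment: the value can never change  */
        (*msgs)++;
    }
    return cache[i];      /* all later reads of element i are local         */
}

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                      /* owner: serve read requests     */
        double data[N];
        for (int i = 0; i < N; i++) data[i] = 100.0 + i;  /* written once   */
        for (;;) {
            int idx;
            MPI_Recv(&idx, 1, MPI_INT, 1, TAG_REQ, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            if (idx == STOP) break;
            MPI_Send(&data[idx], 1, MPI_DOUBLE, 1, TAG_VAL, MPI_COMM_WORLD);
        }
    } else if (rank == 1) {               /* reader: three reads of element 5 */
        int msgs = 0, stop = STOP;
        double sum = 0.0;
        for (int pass = 0; pass < 3; pass++)
            sum += sa_read(5, 0, MPI_COMM_WORLD, &msgs);
        MPI_Send(&stop, 1, MPI_INT, 0, TAG_REQ, MPI_COMM_WORLD);
        printf("sum=%.1f, remote fetches=%d (vs. 3 without caching)\n",
               sum, msgs);
    }

    MPI_Finalize();
    return 0;
}
```

Compiled with mpicc and run with two ranks, the reader performs three reads of the same element but issues only one request/reply pair; without the single-assignment guarantee, every read would require a round trip to the owner.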
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Cristóbal-Salas, A., Chernykh, A., Rodríguez-Alcantar, E., Gaudiot, J.-L. (2005). Exploiting Single-Assignment Properties to Optimize Message-Passing Programs by Code Transformations. In: Grelck, C., Huch, F., Michaelson, G.J., Trinder, P. (eds) Implementation and Application of Functional Languages. IFL 2004. Lecture Notes in Computer Science, vol 3474. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11431664_1
DOI: https://doi.org/10.1007/11431664_1
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-26094-3
Online ISBN: 978-3-540-32038-8