
Raising the level of abstraction for developing message passing applications

The Journal of Supercomputing

Abstract

Message Passing Interface (MPI) is the most popular standard for writing portable and scalable parallel applications for distributed-memory architectures. Writing efficient parallel applications with MPI is a complex task, mainly because programmers must explicitly handle all the complexities of message passing: inter-process communication, data distribution, load balancing, and synchronization. The main goal of our research is to raise the level of abstraction of explicit parallelization with MPI so that the effort involved in developing parallel applications is significantly reduced: less code is written manually, and intrusive changes to existing sequential programs are avoided. In this research, generative programming tools and techniques are combined with a domain-specific language, Hi-PaL (High-Level Parallelization Language), to automate generating and inserting the code required for parallelization into existing sequential applications. The results show that the performance of the generated applications is comparable to manually written versions, while requiring no explicit changes to the existing sequential code.
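The generative approach the abstract describes can be illustrated with a minimal sketch: a source-to-source generator takes unmodified sequential C code plus a high-level parallelization request and emits an MPI version, so the programmer never edits the sequential file by hand. The transformation rules and the simple line matching below are hypothetical illustrations of the idea only, not actual Hi-PaL syntax or the paper's generator.

```python
# Conceptual sketch of non-invasive, generative MPI parallelization.
# Assumption: the sequential code iterates a loop variable from 0 to N,
# and rank r of `size` processes handles the block [r*N/size, (r+1)*N/size).

def generate_mpi_version(seq_source: str, loop_var: str) -> str:
    """Insert MPI setup/teardown and block-partition the named loop by rank."""
    out = ["#include <mpi.h>"]  # generated code needs the MPI header
    for line in seq_source.splitlines():
        if "int main" in line:
            # Open the parallel environment right after main's entry point.
            out.append(line)
            out.append("    MPI_Init(&argc, &argv);")
            out.append("    int rank, size;")
            out.append("    MPI_Comm_rank(MPI_COMM_WORLD, &rank);")
            out.append("    MPI_Comm_size(MPI_COMM_WORLD, &size);")
        elif "return 0;" in line:
            # Close the parallel environment before main returns.
            out.append("    MPI_Finalize();")
            out.append(line)
        elif f"for ({loop_var} = 0; {loop_var} < N;" in line:
            # Rewrite the loop bounds so each rank computes only its block.
            out.append(line
                .replace(f"{loop_var} = 0", f"{loop_var} = rank * (N / size)")
                .replace(f"{loop_var} < N",
                         f"{loop_var} < (rank + 1) * (N / size)"))
        else:
            out.append(line)
    return "\n".join(out)


sequential = """int main(int argc, char **argv) {
    int i;
    for (i = 0; i < N; i++)
        work(i);
    return 0;
}"""

print(generate_mpi_version(sequential, "i"))
```

A real system of this kind operates on the program's syntax tree via pattern-based weaving rather than naive text matching, and must also generate the communication and load-balancing code that this sketch omits; the point is only that the sequential source itself stays untouched.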



Author information

Correspondence to Ritu Arora.


Cite this article

Arora, R., Bangalore, P. & Mernik, M. Raising the level of abstraction for developing message passing applications. J Supercomput 59, 1079–1100 (2012). https://doi.org/10.1007/s11227-010-0490-3
